The main file has been split to avoid having one big file with everything in it. This makes maintenance and change reviews more comfortable.
| Filename | Description & Content |
|---|---|
| docker-compose.yml | Network, volumes, and includes for the rest of the docker-compose files. |
| docker-compose.critical.yml | CA Server & AdGuardHome |
| docker-compose.proxy.yml | Traefik |
| docker-compose.frontend.yml | Greenhouse Main Page. |
| docker-compose.apps.yml | NoIP, TeamSpeak & Traefik Dummy Whoami |
| docker-compose.vpn.yml | Wireguard EZ |
| docker-compose.zero.trust.yml | Twingate Connector |
| docker-compose.monitoring.yml | RPi Monitor |
| docker-compose.cloud.yml | Jellyfin, and Nextcloud with its own Redis & Postgres. |
Inside the Docker folder there is an env folder where the templates are placed. All configuration lives there; each time you want to create a new environment, just clone the templates and adapt them.
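As an illustration of that clone-and-adapt workflow (the folder layout follows the description above, but the file name, variable, and values are made up for the example):

```shell
# Hypothetical illustration of the clone-and-adapt workflow; only the
# template-folder idea comes from the repo layout described above.
workspace=$(mktemp -d)
mkdir -p "$workspace/docker/env/template"
echo "greenhouse_domain=dev.greenhouse.ogt" > "$workspace/docker/env/template/common.env"

# Clone the template into a new "prod" environment and adapt it.
cp -r "$workspace/docker/env/template" "$workspace/docker/env/prod"
sed -i 's/dev\./prod./' "$workspace/docker/env/prod/common.env"
cat "$workspace/docker/env/prod/common.env"
# → greenhouse_domain=prod.greenhouse.ogt
```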
Each service has its own config file, avoiding one huge config file that would make maintenance harder.
| Task | Description | Type |
|---|---|---|
| On Stop Raspberry Pi | Deletes the Greenhouse docker compose stack. Executes the user.stop.sh script. | Systemd |
| On Start Raspberry Pi | Deletes and starts the Greenhouse docker compose stack. Executes the user.start.sh script. | Systemd |
| Backup | Triggers a daily backup using Borg. Executes the backup.sh script. | Cron Job |
The file reboot.cron defines a task that reboots the RPi on a daily basis.
We will use Borg to handle the RPi backups. It will run on a daily basis, driven by the specified configuration file. The initial setup of Borg, i.e. initialising the repository, has to be done manually. Once that is done, just create the config file and store the passphrase somewhere safe.
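A minimal sketch of what a backup.sh of this kind could contain (the repository path, passphrase file, and retention values are placeholders, not the ones from this setup):

```shell
#!/bin/sh
# Placeholder paths; point them at your own repo and passphrase file.
export BORG_REPO=/mnt/backup/greenhouse
export BORG_PASSCOMMAND="cat /root/.borg_passphrase"

# One-time manual step, done before the cron job ever runs:
# borg init --encryption=repokey "$BORG_REPO"

# Daily archive of the /greenhouse folder, named after the date.
borg create --stats ::greenhouse-{now:%Y-%m-%d} /greenhouse

# Keep a reasonable history; tune the retention to your taste.
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6
```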
At the moment, the only port that needs to be opened on the firewall is the WireGuard one, port 51820 (UDP).
Search for and open Windows Defender Firewall, then go to Advanced settings.
Go to Inbound Rules and click New Rule..., as we are allowing external connections.
Click on Port, add the list of ports provided above, and select the protocol. Note that WireGuard's 51820 is UDP, not TCP.
For now, go with the Allow the connection option.
Once completed, you will see the new rule in the Inbound Rules window; in this sample, Greenhouse Ports.
Open the terminal and edit the following file:
> sudo nano /etc/pf.conf
For each port that you want to open, add the next line (for WireGuard, use proto udp instead of tcp):
pass in proto tcp from any to any port [PORT]
The next lines are used to activate/deactivate the rules:
> sudo pfctl -f /etc/pf.conf
# Activate
> sudo pfctl -e
# Deactivate
> sudo pfctl -d
To test whether it is working or not:
> sudo lsof -i :[PORT]
# The expected response should look similar to this:
> sudo lsof -i :51820
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
com.docke 3836 usename 200u IPv6 0x0000000000000001 0t0 UDP *:51820
> sudo lsof -i :51821
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
com.docke 3836 usename 199u IPv6 0x0000000000000001 0t0 TCP *:51821 (LISTEN)
How to forward traffic from NoIP to the computer
The way Internet providers maintain our public IP can differ: they can change it when we restart the router, or at any moment.
There are lots of webpages that can provide this info (ipinfo.io, ipaddress.my, showmyip.com, whatismyip.com...).
Why NoIP? They offer one hostname for free, which we will use to forward our traffic. Because, here it comes, it is FREE.
The screenshot below shows how the NoIP hostname page looks. There you will see your hostname plus the IP it is currently pointing at. While developing, you can either (a) ping your public IP directly, or (b) use this domain.
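To double-check that the hostname is tracking your current public IP, you can compare both values from a terminal (the hostname below is a placeholder, and ifconfig.me is just one of the many IP echo services):

```shell
# Replace with your own NoIP hostname.
dig +short your-hostname.ddns.net
# Your current public IP, for comparison:
curl -s ifconfig.me
```

If the two values differ, the NoIP client has not refreshed the record yet.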
For this, we will be using the smallstep/step-ca image.
Follow their documentation page to do the initial setup of the server before continuing.
Greenhouse aims to give you your own local domain, only accessible once you are connected to the VPN. Because of that, you will not need to buy any domain or trust any external CA, however open-source they may be.
If you are using a Raspberry Pi, as I am, be aware and check this link. In my case, two issues happened:
The first one was fixed by applying the changes from the link regarding the DB:
"db": { "type": "badger", "dataSource": "/home/step/db", "badgerFileLoadingMode": "FileIO" },
Later, I had to update the permissions of my volume folders. But that is my own issue, as my users are not very well configured:
docker run --rm -v prod-ca-db:/data alpine chown -R 1000:1000 /data
For Windows, run this command in PowerShell to avoid the same permission issue:
docker run --rm -v dev-ca-db:/home/step/db alpine chown -R 1000:1000 /home/step/db
All this trouble can be avoided if you configure their own Postgres instead.
Add the ACME provisioner and configure ca.json with the DB.
If everything works as expected, execute the commands below to include the new provisioner.
# Log into the container
> docker exec -it ca sh
# Add the new ACME provisioner. After this, restart to ensure the config has been applied.
> step ca provisioner add greenhouse-acme --type ACME
With this, it should be enough to make it work!
DON'T FORGET TO INCLUDE THE NEW CERTIFICATES IN YOUR DEVICES!
Additionally, I added a few improvements to ca.json. I am not sure if they are all necessary, but I will list them below:
...
"dnsNames": [
"localhost",
"ca.dev.greenhouse.ogt",
"traefik.dev.greenhouse.ogt",
"vpn.dev.greenhouse.ogt",
"adguard.dev.greenhouse.ogt",
...
"dev.greenhouse.ogt"
],
...
"policy": {
"x509": {
"allow": {
"dns": ["*.dev.greenhouse.ogt"]
},
"allowWildcardNames": false
},
"host": {
"allow": {
"dns": ["*.dev.greenhouse.ogt"]
}
}
},
...
"db": {
"type": "postgresql",
"dataSource": "",
"badgerFileLoadingMode": ""
},
...
{
"type": "ACME",
"name": "greenhouse-acme",
"claims": {
...
},
"challenges": [
"tls-alpn-01"
],
"attestationFormats": [
"apple",
"step",
"tpm"
],
"options": {
"x509": {
"enabled": true,
"durations": {
"default": "168h",
"min": "5m",
"max": "12000h"
}
},
"ssh": {}
}
}
...
"commonName": "Step Online CA for ${REPLACE THIS WITH YOUR DOMAIN}"
The file we will need to import is root_ca.crt. It is placed in the Step-CA volume, in the /cert folder.
Double click on the file root_ca.crt > click on Install Certificate > choose Your User or the whole machine (up to you) > browse to Trusted Root Certification Authorities. Done ❤️
After importing the certificate, restart the browsers and the "Insecure webpage" messages should disappear.
It will be used to configure the authentication layer of the project. The plan is to configure each exposed service with it to ensure security.
The first step, as the documentation page says, is to go to https://your.authentik.domain/if/flow/initial-setup/ and set up your admin user. I will use it to configure mine and later disable the default admin user, just in case.
After logging in, click on Admin Interface > Directory > Users and create a new user. Make sure to set its User Type to Internal.
Once that is done, edit the user and add it to the Admin Group.
For applications like AdGuardHome, or subdomains without any auth of their own, you can secure them using this. The proxy (in our case Traefik) is in charge of doing the redirections, ensuring that you are logged in.
To do it, Applications > Providers > New Provider > Proxy Provider
Now the application to secure: Applications > Applications > Create
Before continuing, make sure the application you are trying to secure with OAuth2/OpenID actually supports it. If not, I guess using the Proxy Provider should be enough.
The first headache Nextcloud gives is adding our custom CA certificate, generated by Step-CA, to its trusted certificates. To do it, I had to create a post-installation hook and customise the entrypoint a little to make sure the permissions required to add the certificate are sufficient.
Thanks to these two pages I was able to complete this. Before, it was failing on the redirect to Authentik, throwing SSL connection errors.
- https://help.nextcloud.com/t/add-ca-certificate-for-ldaps-on-docker-image/119386
- https://docs.nextcloud.com/server/latest/admin_manual/occ_command.html#security-commands-label
Anyway, next step: check this documentation page. Mainly, you will have to install an extension app on Nextcloud called OpenID Connect user backend. It could change, but I found it by going to Apps > Social & communication and searching for it.
Once you find it, just click on Enable ❤️.
Once this is installed, add the following line to the ${greenhouse_nextcloud_vol_config}/config.php file:
...
'allow_local_remote_servers' => true,
...
Now, when you access the login page, a new login option will appear, Authentik:
If you want to redirect directly to Authentik instead of seeing this screen, you can run this on Docker:
# On Docker UI
$ ./occ config:app:set --value=0 user_oidc allow_multiple_user_backends
# In the bash console
$ sudo docker exec --user www-data -it nextcloud-aio-nextcloud php occ config:app:set --value=0 user_oidc allow_multiple_user_backends
The last bit of configuration required is on the Authentik side. Pay special attention to this part of the documentation. Once the Nextcloud profile is created, make sure to add it to the Provider; otherwise you can end up with communication errors or Nextcloud errors that will not let you log in.
After that bit, every time a new user logs in, they will appear in Nextcloud.
They require a licence (which for personal use does not make ANY sense) to use OAuth2. The issue I had, since the project generates its own CA certificate, was that connections were refused due to the lack of trust in the certificates. Just follow this link to fix it: I added a new volume with the Step-CA certificates and then imported them with the command below.
...
volumes:
- ${greenhouse_pdf_volume_tessdata:-${PWD}/pdf/${ENV}/tessdata}:/usr/share/tessdata
- ${greenhouse_pdf_volume_configs:-${PWD}/pdf/${ENV}/configs}:/configs
- ${greenhouse_pdf_volume_logs:-${PWD}/pdf/${ENV}/logs}:/logs
- ${greenhouse_pdf_volume_pipeline:-${PWD}/pdf/${ENV}/pipeline}:/pipeline
- ${greenhouse_ca_volume_certs:-${PWD}/step-ca/${ENV}/certs}:/usr/local/share/ca-certificates/ # <---- This line
...
$ docker exec stirling-pdf update-ca-certificates
After that, if you have a licence, it should work. If not, it will redirect you to the URL https://your-domain/errorOAuth=oAuth2RequiresLicense.
For the dev environment, the files are already committed to the repository, but when deploying on your own server you will have to do it from scratch.
The configuration that you will currently have to handle is:
- DNS configuration.
- Host (in our case provided by NoIP).
- User creation.
Make sure to do all the changes before creating any user. Once a user is imported to the client, there are configurations that, if they change (like the DNS ones), will force you to reimport the user.
A quick note on the DNS configuration: add the IP of AdGuard first, and then some extra DNS servers. In our case we are using the DNS provided by the EU.
I agree that using an external provider to connect to your local network is not the best of ideas, but they provide security and, most importantly, a way in: my ISP was blocking my external connections and I wasn't able to make the VPN work. Basically, Twingate offered me the chance to connect to my whole setup without having to expose any port to the internet or maintain any domain, as I was doing with No-IP.
It's very simple once you start to play with it a little. In my case, only for dev, I created a Network called gh-sobremesa and attached 4 resources to it, as you can see in the screenshot below:
- Dev | AdGuardHome: This resource has port 443 exposed plus the alias adguard.dev.greenhouse.ogt. Basically it exists to be able to access the dashboard. The IP belongs to the Traefik service; remember it is the proxy that serves the dashboard.
- Dev | AdGuardHome DNS: As port 53 is exposed on the service, you have to create a separate Resource just for that.
- Dev | Main Page: It serves dev.greenhouse.ogt on 443.
- Dev | Traefik: It serves traefik.dev.greenhouse.ogt with its dashboard, as well as the WhoAmI service. Both on 443.
Inside the configuration you can select how you want to define the resource:

Based on the value you provide in the dropdown, the resource will or will not appear once you are logged in and connected to Twingate. This makes it easy to navigate between your services while hiding APIs or any other resources. Quite cool.
The DNS is not working as expected: Twingate is not using my internal container as DNS; it resolves using its own internal DNS. As a solution, I could use an external or third-party DNS, where I would be able to expose my own DNS and use it, but my current ISP does not allow me to do that...
On AdGuardHome's side, all requests arrive encrypted through Twingate, which makes it impossible to filter them...
The most important configurations to perform on AdGuard at the moment are the DNS rewrites and the DNS blocklists.
This one is easy: read Reddit or just do a quick search on the internet, and select the blocklists that seem right to you. As simple as that.
This is the sample that I use in my dev environment. Below I will explain some issues and problems I faced while configuring this.
The reason for having multiple lines per domain is to let AdGuardHome provide the most suitable IP to the client, depending on whether you are connected via VPN, via Twingate, or locally. The DNS will rewrite to the best option.
Traefik is configured in the project to serve content by domain, but using *.dev.greenhouse.ogt would also redirect the CA's traffic to Traefik, and it should not do that.
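To illustrate the multiple-lines-per-domain idea, the rewrites can look something like this (the IPs are invented for the example; use your own LAN and VPN addresses):

```
dev.greenhouse.ogt → 192.168.1.50   LAN clients reach Traefik directly
dev.greenhouse.ogt → 10.8.0.1      clients connected through the VPN tunnel
```

AdGuardHome then answers each client with whichever address is reachable from where it sits.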
I really like this app for serving the local movies and shows that I keep on my computer. In any case, it was too heavy for the Raspberry Pi: I tried to watch a movie and it was laggy enough to make me stop it.
At the moment I only deployed RPi Monitor, to watch the resources Jellyfin was using and to know the status of the Raspberry. I will improve this part with Grafana, as well as centralising the logs. WIP.
A tool for editing PDFs that avoids exposing or uploading any PDF to some weird app or webpage.
A group of IT tools that are nice to have; as with Stirling PDF, with this you avoid any risk of uploading data to weird pages.
The Raspberry Pi seems to be handling Nextcloud and its databases really well. I just did the install and performed the following changes in the configuration.
<?php
$CONFIG = array (
  ...
  'default_language' => 'es',
  'default_locale' => 'es_ES',
  'default_phone_region' => 'ES',
  'default_timezone' => 'Europe/Madrid',
  ...
  'trusted_domains' =>
  array (
    0 => 'localhost',
    1 => '*.dev.greenhouse.ogt',
    2 => '127.0.0.1',
    3 => '42.42.42.100',
  ),
  'trusted_proxies' =>
  array (
    0 => '127.0.0.1',
    1 => '42.42.42.50',
  ),
  'forwarded_for_headers' =>
  array (
    0 => 'HTTP_X_FORWARDED',
    1 => 'HTTP_FORWARDED_FOR',
  ),
  ...
  'mail_smtpmode' => 'smtp',
  'mail_sendmailmode' => 'smtp',
  'mail_smtpport' => '587',
  'mail_smtphost' => 'smtp.gmail.com',
  'mail_domain' => 'gmail.com',
  'mail_from_address' => 'dev-account',
  'mail_smtpauth' => true,
  'mail_smtpname' => 'dev-account@gmail.com',
  'mail_smtppassword' => 'a_p@ssw0rd',
  'skeletondirectory' => '',
  'templatedirectory' => '',
  'logo_url' => 'https://raw.githubusercontent.com/RoldanGreenhouse/ivy-automation/refs/heads/56-add-cloud-services-and-multimedia-player-on-rpi/drawio/logo.png',
  'mail_domain' => 'dev.greenhouse.com',
  'mail_from_address' => 'admin',
  ...
);
After this, I just modified the theme and everything was working smoothly.
IMPORTANT for GMAIL! To make the email work, go to the Profile of the admin user and assign an email address, the same one given in the SMTP configuration. That is what I did to make it work.
https://www.reddit.com/r/NextCloud/comments/1jydvvb/using_gmail_as_email_server/
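If you prefer doing it from the CLI, the same profile email can presumably be set with occ (the container name follows the earlier example; the user and address are placeholders):

```shell
# Assign the SMTP account's address to the admin user's profile.
sudo docker exec --user www-data -it nextcloud-aio-nextcloud \
  php occ user:setting admin settings email "dev-account@gmail.com"
```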
An issue found when you have your own CA is that Nextcloud will refuse communications due to the missing CA certificate: it does not trust it. I made a hook to add the certificate manually and allow the OAuth2 communication, which would fail without it.
A tip here: while developing I noticed that on my Windows machine the hook worked perfectly, but on the Raspberry Pi, once I started the service and let it settle after the installation, this message appeared:
...
=> Searching for hook scripts (*.sh) to run, located in the folder "/docker-entrypoint-hooks.d/post-installation"
==> The script "/docker-entrypoint-hooks.d/post-installation/import_greenhouse_ca.sh" was skipped, because it lacks the executable flag
==> Skipped: the "post-installation" folder does not contain any valid scripts
...
This issue appears because the file does not have the required permissions. If you do an ls -la over the script, these are its default permissions:
$ ls -la /greenhouse/ivy-automation/docker/nextcloud-hooks/post-installation
total 12
drwxr-xr-x 2 greenhouse greenhouse 4096 feb 16 17:06 .
drwxr-xr-x 4 greenhouse docker 4096 feb 16 17:04 ..
-rw-r--r-- 1 greenhouse docker 712 feb 16 16:59 import_greenhouse_ca.sh
Giving the correct permissions to the script will fix it:
$ sudo chmod +x /greenhouse/ivy-automation/docker/nextcloud-hooks/post-installation/import_greenhouse_ca.sh
...
=> Searching for hook scripts (*.sh) to run, located in the folder "/docker-entrypoint-hooks.d/post-installation"
==> Running the script (cwd: /var/www/html): "/docker-entrypoint-hooks.d/post-installation/import_greenhouse_ca.sh"
Setting variables...
Creating path </var/www/html/ca-certificates>
Duplicating Greenhouse CA certificate <greenhouse_step_ca.crt>...
Importing certificate </var/www/html/ca-certificates/greenhouse_step_ca.crt> ...
Import completed. Checking current certificates...
+------------------------+--------------------------------------------------+------------------------------------------+-------------------+--------------------------------------------------+
| File Name | Common Name | Organization | Valid Until | Issued By |
+------------------------+--------------------------------------------------+------------------------------------------+-------------------+--------------------------------------------------+
| greenhouse_step_ca.crt | Greenhouse CA Server - greenhouse.roldan Root CA | Greenhouse CA Server - greenhouse.roldan | February 14, 2036 | Greenhouse CA Server - greenhouse.roldan Root CA |
+------------------------+--------------------------------------------------+------------------------------------------+-------------------+--------------------------------------------------+
Done!
==> Finished executing the script: "/docker-entrypoint-hooks.d/post-installation/import_greenhouse_ca.sh"
...
We will be using this service as a reverse proxy. Each service will be configured to use the correct subdomain and to run its certificate challenge against the CA through Traefik.
On the first try I went for the official image provided by TeamSpeak, but they do not support the RPi architecture. I did a little research, compared the numbers, and ertagh's version is the one I liked the most. The bad side is that it does not use any volume, so if the container goes down or similar, the configuration will go bananas :D
In any case, port forwarding is only applied for port 9987 (voice channel); for security reasons, ports 10011 (ServerQuery) & 30033 (file transfer) remain accessible only from the local network.
To configure the project, we have created the folder /greenhouse, which will contain this repository with all the DevOps configs.
See below the tree of folders:
/greenhouse/
└── ivy-automation
├── ansible
│ ├── ansible.cfg
│ ├── ansible_vault_password
│ ├── inventory
│ │ ├── computers
│ │ └── host_vars
│ │ ├── debian.yml
│ │ ├── rpi.yml
│ │ ├── vault.yml (only on the machine. Ignored on repo.)
│ │ └── w3070.yml
│ ├── playbooks
│ │ ├── ping.yml
│ │ └── variable_checker.yml
│ └── ssh
│ ├── id_ansible
│ └── id_ansible.pub
├── LICENSE
├── profiles
│ └── ...
└── README.md
This file contains the list of IPs, hostnames or DNS names that Ansible will manage. In the ansible.cfg file we have added the variable inventory, which contains the path of the main inventory we will use.
all:
  children:
    windows:
      hosts:
        w3070:
    linux:
      hosts:
        rpi:
        debian:
    vbox:
      hosts:
        debian:
    greenhouse:
      hosts:
        w3070:
        rpi:
        debian:
Let's first run the next command to ensure that Ansible is able to reach all the machines listed in the /ansible/config/inventory file.
$ ansible all --key-file /path/to/ssh/key -i /path/to/inventory/file -m ping --limit {host-name}
# ex
$ ansible all -i inventory.yaml -m win_ping --limit w3070
$ ansible all -i inventory.yaml -m ping --limit rpi
Using as reference the official Ansible docs for Windows setup:
# Check versions available
> winget search Microsoft.PowerShell
# Install
> winget install --id Microsoft.Powershell --source winget
> winget install --id Microsoft.Powershell.Preview --source winget
To check the current keys, look in the folder /home/{user}/.ssh. Inside you should find the known_hosts file plus the generated keys.
# To generate a key, execute the next command:
$ ssh-keygen -t ed25519 -C Ansible
# To copy the ssh key to a Server
$ ssh-copy-id -i {path of public ssh key, i.e.: /home/gh/.ssh/id.pub} {IP of the Server}
To make the setup, we created the file inventory/host_vars/vault.yml and added all the credentials there, so we can reference them later in playbooks.
Once created, just do ansible-vault encrypt.
$ ansible-vault encrypt --vault-password-file ansible_vault_password inventory/host_vars/vault.yml
$ ansible-vault view --vault-password-file ansible_vault_password inventory/host_vars/vault.yml
$ ansible-vault edit --vault-password-file ansible_vault_password inventory/host_vars/vault.yml
In the ansible.cfg file we have added the variable vault_password_file, which contains the password used to encrypt the vault, so the --vault-password-file ansible_vault_password flag is no longer required.
The .bashrc file includes a few tiny functions that help make the environments more comfortable.
| Variable Name | Description | Example |
|---|---|---|
| BASE_GREENHOUSE_WORKSPACE | Main folder where the Greenhouse repositories are placed. | /c/Users/mike/Documents/Workspace |
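A sketch of the kind of helper .bashrc could define around that variable (the function name and behaviour are assumptions; only BASE_GREENHOUSE_WORKSPACE comes from the table above):

```shell
# Default the workspace if the variable is not already exported.
export BASE_GREENHOUSE_WORKSPACE="${BASE_GREENHOUSE_WORKSPACE:-$HOME/Documents/Workspace}"

# Jump into a repository under the Greenhouse workspace.
gh_repo() {
  cd "${BASE_GREENHOUSE_WORKSPACE}/${1:?usage: gh_repo <repository>}" || return 1
}
```

For example, `gh_repo ivy-automation` drops you into that repo's checkout.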
Explanation of commands:
- systemctl start <service>: starts the service immediately (in this case, SSH).
- systemctl enable <service>: enables the service to start automatically at system boot.
- systemctl status <service>: shows the current status of the service (running, stopped, etc.).
- systemctl is-enabled <service>: checks if the service is enabled to start on boot.
- systemctl stop <service>: stops the service immediately.
- systemctl disable <service>: disables the service from starting at boot.
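For example, to have SSH running immediately and on every boot, the start and enable steps can be combined:

```shell
# Start now and enable at boot in one step.
sudo systemctl enable --now ssh
# Verify the result:
systemctl is-enabled ssh
systemctl status ssh
```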
At the time this README is being written, the server where I installed Proxmox cannot be connected to the internet directly via cable, so I had to do a little bit of research...
I ended up on this post. So, let's go:
The first steps require cable, just to update the packages and install wpasupplicant.
> apt update && apt install wpasupplicant
> systemctl disable wpa_supplicant
# OBVIOUSLY, replace SSIDNAME and PASSWORD by yours
> wpa_passphrase SSIDNAME PASSWORD >> /etc/wpa_supplicant/wpa_supplicant.conf
I got the same verification message as the one shown in the tutorial:
> dmesg | grep wlp
[ 4.021984] rtw89_8852be 0000:04:00.0 wlp4s0: renamed from wlan0
Create /etc/systemd/system/wpa_supplicant.service and add the configuration. As shown above, the interface is wlp4s0.
touch /etc/systemd/system/wpa_supplicant.service
Code:
[Unit]
Description=WPA supplicant
Before=network.target
After=dbus.service
Wants=network.target
IgnoreOnIsolate=true
[Service]
Type=dbus
BusName=fi.w1.wpa_supplicant1
ExecStart=/sbin/wpa_supplicant -u -s -c /etc/wpa_supplicant/wpa_supplicant.conf -i wlp4s0
Restart=always
[Install]
WantedBy=multi-user.target
Alias=dbus-fi.w1.wpa_supplicant1.service
Now, enable wpasupplicant again:
> systemctl enable wpa_supplicant
Modify the /etc/network/interfaces file again:
auto lo
iface lo inet loopback
iface enp3s0 inet manual
# This block contains the lines that I edited
auto wlp4s0
iface wlp4s0 inet static
address 192.168.3.10/22
gateway 192.168.0.1
#
auto vmbr0
iface vmbr0 inet static
address 192.168.1.11/22
bridge-ports enp3s0
bridge-stp off
bridge-fd 0
source /etc/network/interfaces.d/*
Restart the wpasupplicant and networking services to connect the wireless adapter to the wifi network:
> systemctl restart wpa_supplicant && systemctl restart networking
Finally, as the tutorial says, I made the last update to /etc/network/interfaces:
auto lo
iface lo inet loopback
iface enp3s0 inet manual
auto wlp4s0
iface wlp4s0 inet static
address 192.168.3.11/22
gateway 192.168.0.1
auto vmbr0
iface vmbr0 inet static
address 192.168.3.10/22
bridge-ports none
bridge-stp off
bridge-fd 0
auto home
iface home inet static
address 192.168.3.12/22
bridge-ports none
bridge-stp off
bridge-fd 0
hwaddress 0b:a0:e2:b6:b6:08
post-up echo 1 > /proc/sys/net/ipv4/ip_forward
post-up iptables -A FORWARD -i wlp4s0 -j ACCEPT
post-up iptables -A FORWARD -o wlp4s0 -j ACCEPT
post-up iptables -A FORWARD -i home -j ACCEPT
post-up iptables -A FORWARD -o home -j ACCEPT
source /etc/network/interfaces.d/*