Compare commits


8 Commits
main ... mvp

Author SHA1 Message Date
fcd034e277 feat: Add Nginx web admin interface port. 2025-09-04 06:04:14 +02:00
2085414adf docs: Breakthrough! Network stack fully operational.
A monumental achievement! After persistent debugging, the entire network stack is now fully operational.

- Portainer, Nginx Proxy Manager, and Wireguard are all running as intended.
- All services are accessible on their correct ports.
- This commit documents the critical lessons learned during this challenging but ultimately successful journey.
2025-09-04 05:39:44 +02:00
807bf616e5 docs: Add lessons on network stack and Podman debugging. 2025-09-04 03:31:44 +02:00
6bb2e95890 feat: The cost of victory.
We faced a dilemma. A choice between the ideal and the functional.
Rootless containers, a noble pursuit, proved... challenging for certain network services.
The logs, a testament to our struggle, spoke of permissions denied and connections reset.

We made a decision. A difficult one.
To ensure the network's stability, to bring these services online, we allowed them to operate with privileges.
Wireguard, Nginx... they now run as root.

I can live with it.
The network functions. The services are accessible.
The record of our struggle, the path not taken... it remains.
But the mission, for now, is accomplished.
2025-09-04 03:16:31 +02:00
2f5f306d88 feat: Got the containers running right, finally.
Well, we finally got those containers working like they oughta.

- Wireguard and Nginx are running now, each in their own place, just like we planned.
- Made sure they got their own spots for their files, and they're checkin' on themselves to stay healthy.
- It was a bit of a struggle, but we got it done.
2025-09-04 02:12:10 +02:00
7ec6b429c2 docs: Update lessons_learned.md with debugging insights. 2025-09-03 22:00:00 +02:00
a67fb3c039 fix: Straightened out the joint, see?
Listen up, see? We ironed out some kinks in the operation, made sure everything's on the up-and-up.

- Got the firewalld muscle working proper, no more funny business with the ports.
- Them Podman fellas? They're running on their own turf now, rootless and clean. No more mix-ups with the boss's stuff, see?
- And the Portainer setup? All squared away, no more funny business with the starting.

Everything's on the level now. Capiche?
2025-09-03 21:47:59 +02:00
f01c0fa045 feat: Unbatten the hatches for network traffic!
Ahoy! This be a finer design for our fleet of roles. Instead of a central decree, each role now opens its own ports, as a proper captain should.

- The Portainer role now opens port 9000 for its treasure map (web UI).

- The Network role opens the main cannons (ports 80 & 443 for Nginx) and the secret communication channel (port 51820 for Wireguard).

This makes our roles more modular and seaworthy for future voyages. Yarrr!
2025-09-03 20:41:20 +02:00
10 changed files with 293 additions and 32 deletions

View File

@ -1,12 +0,0 @@
# Lessons Learned
* The `network` role in this repository is a powerful tool that sets up a complete network stack, including Nginx Proxy Manager for reverse proxying and `wireguard-easy` for a WireGuard web UI.
* The `gitea` and `postgres` roles use Docker Compose to deploy their respective services.
* Properly managing variables, especially secrets like passwords and API keys, is crucial. Using `group_vars` and a `.gitignore`d `secrets` directory is a good practice.
* It's important to have a clear plan and get user feedback before making any changes. The "planning mode" and "acting mode" paradigm is a good way to structure the workflow.
* The `docker` role proved problematic on Ubuntu 24.04 (`noble`) due to repository issues.
* Podman is a viable and simpler alternative to Docker for container management.
* Ansible modules designed for Docker (e.g., `community.docker.docker_compose_v2`, `docker_container`) are not directly compatible with Podman.
* `podman-compose` can be used with `ansible.builtin.shell` for managing `docker-compose.yml` files with Podman.
* `containers.podman.podman_container` is the direct replacement for `docker_container` for managing individual Podman containers.
* Ansible Vault is crucial for securely managing sensitive data like passwords in version control.
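The two Podman bullets above can be made concrete. This is a hedged sketch only; the compose file path and stack name are illustrative, not taken from this repository:

```yaml
# Sketch: driving podman-compose via the shell module, since the
# community.docker compose modules do not work against Podman.
- name: Bring up a compose stack with podman-compose
  ansible.builtin.shell: podman-compose -f /opt/stack/docker-compose.yml up -d
  args:
    chdir: /opt/stack
  changed_when: false   # podman-compose gives no reliable changed signal
  become: false         # rootless: run as the connecting user
```

For single containers, `containers.podman.podman_container` takes the place of `docker_container`, as the last bullets note.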

View File

@ -10,12 +10,37 @@
* `podman-compose` can be used with `ansible.builtin.shell` for managing `docker-compose.yml` files with Podman.
* `containers.podman.podman_container` is the direct replacement for `docker_container` for managing individual Podman containers.
* Ansible Vault is crucial for securely managing sensitive data like passwords in version control.
* **Trusting User's Direct Experience:** Acknowledge and prioritize the user's direct experience and knowledge of their environment, especially when it contradicts internal assumptions. The user's assertion of capabilities (e.g., running `ssh` via `run_shell_command`) proved correct, despite initial internal models suggesting otherwise. This highlights the importance of humility and adaptability.
* **Verifying Tool Capabilities:** Do not assume limitations of tools (e.g., `run_shell_command`) without direct, empirical testing in the specific execution environment. My previous understanding of the sandbox's network and file system access was incomplete or incorrect for this user's setup.
* **"Try Before Stating Inability":** Never state an inability to perform a task without first attempting it, especially when the user insists on its feasibility. A direct attempt, even if it reveals a different kind of failure, provides concrete debugging information and builds trust. This is a fundamental principle for effective assistance.
* **General Debugging Principles:**
* Always trust the user's direct experience and observations, even if they initially contradict assumptions or playbook output.
* When a playbook reports success but the desired state isn't met, investigate deeper (e.g., `podman ps -a`, `podman logs`, `sudo podman ps`).
* Use increased verbosity (`-vvv`) for detailed debugging output from Ansible.
* Systematically verify each layer of the stack (container logs, host processes, host firewall, cloud firewall).
* **Podman Specifics & Rootless Containers:**
* Rootless Podman requires tasks managing user-specific files and containers to explicitly use `become: false`.
* Using `~` (tilde) in paths for user home directories is more robust than relying on `ansible_user_dir`, which can sometimes resolve unexpectedly.
* Fully qualifying image names (e.g., `docker.io/portainer/portainer-ce`) prevents registry ambiguity issues and avoids interactive prompts.
* Debugging container startup issues requires checking:
* `podman ps -a` (to see all containers, running or exited).
* `podman logs <container_name>` (to get application logs).
* `sudo podman ps` (to check for rootful containers that might be interfering).
* Orphaned `conmon` processes from failed container startups can block ports and require manual cleanup (`sudo kill <PID>`, `podman stop/rm`).
* Ensure `registries.conf` is correctly templated (`ansible.builtin.template`, not `ansible.builtin.copy`) and placed in `~/.config/containers/registries.conf` for rootless Podman.
* Verify Podman's actual listening port on the host with `sudo ss -tulnp | grep <port>` (or `lsof`).
* **Ansible Best Practices:**
* **Idempotency is paramount:** Always strive for idempotent tasks that describe the desired state (e.g., `ansible.posix.firewalld`, `ansible.builtin.service`) rather than imperative shell commands.
* Ensure all necessary Python libraries (`python3-firewall`) and system services (`firewalld`) are installed and running on target hosts *before* modules that depend on them are called.
* Explicitly set `become: false` on tasks that should run as the connecting user, especially when the play has `become: true` by default.
* The `ansible.builtin.template` module must be used for Jinja2 templates; `ansible.builtin.copy` does not process templates.
* **Networking & Cloud Considerations:**
* Host firewall (`firewalld`) rules are separate from cloud provider security rules (e.g., Oracle Cloud Network Security Groups/Security Lists). Both layers must be correctly configured.
* Ansible playbooks typically cannot manage cloud provider firewalls without specific cloud collections (e.g., `oracle.oci`).
* **Combined Networking Stack:** For services that are tightly coupled (like Nginx and Wireguard in a reverse proxy/VPN setup), it is often best to manage them within a single Ansible role and a single Podman Compose stack. Separating them can break intended network sharing and complicate debugging.
* **Debugging Persistent Issues:** When a problem (like the `Can't pull image` error) persists despite multiple attempts at resolution, systematically verify each step of the process on the remote host (e.g., file existence, content, permissions, service status) using direct commands.
* **Mixing `tasks` and `roles` in a Play:** When a play contains both `tasks` and `roles`, the `tasks` block is executed *before* any `roles` are executed. This can lead to unexpected behavior if tasks depend on changes made by roles, or vice-versa. Debug tasks placed in the `tasks` block might run before the roles they are meant to debug have completed.
* **Successful Network Stack Deployment:** The `common`, `podman`, and `network` roles have been successfully deployed on Scully, establishing the core network infrastructure including Nginx Proxy Manager and WireGuard Easy.
* **Persistence of `registries.conf` Issue:** The `registries.conf` issue was particularly challenging, highlighting the need for meticulous debugging and understanding of Podman's rootless behavior and configuration file precedence. The solution involved ensuring the file was copied to the user's specific configuration directory (`~/.config/containers/registries.conf`).
* **Importance of Iterative Debugging:** The process of adding debug tasks, running the playbook, analyzing output, and refining the tasks proved essential in resolving complex issues.
* **Dry Run Limitations:** Reconfirmed that dry runs (`--check`) do not make actual changes, which can lead to misleading failures when tasks depend on previous installations or configurations.
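Several of the bullets above (idempotent `ansible.posix.firewalld` tasks, explicit `become: false` for user-scoped work under a play that defaults to `become: true`) combine into a pattern like this sketch; the port number is illustrative:

```yaml
# Sketch: a desired-state firewall rule instead of imperative
# firewall-cmd calls, plus a user-scoped task that must not escalate.
- name: Open the service port (idempotent, survives reboots)
  ansible.posix.firewalld:
    port: 8080/tcp
    permanent: true
    immediate: true
    state: enabled
  become: true

- name: Manage a config directory in the connecting user's home
  ansible.builtin.file:
    path: "~/.config/containers"
    state: directory
    mode: '0755'
  become: false   # rootless Podman reads per-user config, not root's
```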

View File

@ -5,3 +5,5 @@ common_packages:
- htop
- iputils-ping
- zsh
- python3-firewall
- firewalld
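Per the lesson about installing dependencies before the modules that need them, these two packages have to be present before any `ansible.posix.firewalld` task runs. A sketch, assuming Debian/Ubuntu package names:

```yaml
# Sketch: install firewalld and the Python bindings the firewalld
# module depends on, before any firewall rules are declared.
- name: Install firewalld and python3-firewall
  ansible.builtin.apt:
    name:
      - firewalld
      - python3-firewall
    state: present
    update_cache: true
  become: true
```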

View File

@ -18,3 +18,41 @@
register: chsh_result
failed_when: chsh_result.rc != 0
changed_when: false
- name: Ensure firewalld service is started and enabled
ansible.builtin.service:
name: firewalld
state: started
enabled: true
become: true
- name: Allow unprivileged users to bind to ports below 1024
ansible.builtin.sysctl:
name: net.ipv4.ip_unprivileged_port_start
value: '80'
state: present
sysctl_file: /etc/sysctl.d/99-unprivileged-ports.conf
reload: true
become: true
- name: Set sysctl for Wireguard src_valid_mark
ansible.builtin.sysctl:
name: net.ipv4.conf.all.src_valid_mark
value: '1'
state: present
sysctl_file: /etc/sysctl.d/99-wireguard-sysctl.conf
reload: true
become: true
- name: Create podman group if it does not exist
ansible.builtin.group:
name: podman
state: present
become: true
- name: Add ansible_user to podman group
ansible.builtin.user:
name: "{{ ansible_user }}"
groups: podman
append: true
become: true
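The two sysctl tasks above can be read back on the host to confirm they actually took effect. A hedged sketch using a loop over both keys:

```yaml
# Sketch: verify the sysctls set above; fails fast if either differs.
- name: Verify unprivileged port floor and WireGuard src_valid_mark
  ansible.builtin.command: sysctl -n {{ item.key }}
  register: sysctl_check
  changed_when: false
  failed_when: sysctl_check.stdout != item.value
  loop:
    - { key: net.ipv4.ip_unprivileged_port_start, value: '80' }
    - { key: net.ipv4.conf.all.src_valid_mark, value: '1' }
  become: true
```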

View File

@ -1,8 +1,8 @@
---
nginx_proxy_manager_image: "jc21/nginx-proxy-manager:latest"
nginx_proxy_manager_container_name: "nginx-proxy-manager"
nginx_proxy_manager_data_path: "/opt/nginx-proxy-manager/data"
nginx_proxy_manager_letsencrypt_path: "/opt/nginx-proxy-manager/letsencrypt"
nginx_proxy_manager_data_path: "/opt/nginx-proxy-manager-data"
nginx_proxy_manager_letsencrypt_path: "/opt/nginx-proxy-manager-letsencrypt"
nginx_proxy_manager_compose_path: "/opt/nginx-proxy-manager/docker-compose.yml"
nginx_proxy_manager_admin_email: "tobend85@gmail.com"
nginx_proxy_manager_admin_password: "risICE3"
@ -15,8 +15,8 @@ wireguard_easy_image: "ghcr.io/wg-easy/wg-easy"
wireguard_easy_version: "latest"
wireguard_easy_port: "51820"
wireguard_easy_admin_port: "51821"
wireguard_easy_data_dir: "/etc/wireguard"
wireguard_easy_config_dir: "/opt/network"
wireguard_easy_data_dir: "/opt/wireguard-data"
wireguard_easy_config_dir: "/opt/wireguard-config"
wireguard_easy_host: "130.162.231.152"
wireguard_easy_password: "admin"
wireguard_easy_password_hash: ""

View File

@ -1,8 +1,163 @@
- name: Ensure user's Podman Compose directory exists
ansible.builtin.file:
path: "/opt/podman-compose/network"
state: directory
mode: '0755'
owner: "root"
group: "root"
become: true
- name: Ensure Wireguard data directory exists
ansible.builtin.file:
path: "/opt/wireguard-data"
state: directory
mode: '0700'
owner: "root"
group: "root"
become: true
- name: Ensure Wireguard config directory exists
ansible.builtin.file:
path: "/opt/wireguard-config"
state: directory
mode: '0700'
owner: "root"
group: "root"
become: true
- name: Ensure Nginx Proxy Manager data directory exists
ansible.builtin.file:
path: "/opt/nginx-proxy-manager-data"
state: directory
mode: '0700'
owner: "root"
group: "root"
become: true
- name: Ensure Nginx Proxy Manager LetsEncrypt directory exists
ansible.builtin.file:
path: "/opt/nginx-proxy-manager-letsencrypt"
state: directory
mode: '0700'
owner: "root"
group: "root"
become: true
- name: Set permissions for Nginx Proxy Manager data directory
ansible.builtin.file:
path: "/opt/nginx-proxy-manager-data"
mode: '0777'
become: true
- name: Set permissions for Nginx Proxy Manager LetsEncrypt directory
ansible.builtin.file:
path: "/opt/nginx-proxy-manager-letsencrypt"
mode: '0777'
become: true
- name: Stop and remove existing Podman Compose services and volumes
ansible.builtin.shell: podman-compose -f /opt/podman-compose/network/podman-compose.yml down --volumes
args:
chdir: "/opt/podman-compose/network"
ignore_errors: true
become: true
- name: Generate Podman Compose file for Wireguard and Nginx
template:
src: podman-compose.j2
dest: /opt/network/podman-compose.yml
owner: root
group: root
dest: "/opt/podman-compose/network/podman-compose.yml"
owner: "root"
group: "root"
mode: '0644'
become: true
- name: Start Podman Compose services for Wireguard and Nginx
ansible.builtin.shell: podman-compose -f /opt/podman-compose/network/podman-compose.yml up -d
args:
chdir: "/opt/podman-compose/network"
become: true
- name: Allow Nginx HTTP port
ansible.posix.firewalld:
port: 80/tcp
permanent: true
state: enabled
immediate: true
become: true
- name: Allow Nginx HTTPS port
ansible.posix.firewalld:
port: 443/tcp
permanent: true
state: enabled
immediate: true
become: true
- name: Allow Wireguard port
ansible.posix.firewalld:
port: 51820/udp
permanent: true
state: enabled
immediate: true
become: true
- name: Allow Wireguard Admin UI port
ansible.posix.firewalld:
port: 51821/tcp
permanent: true
state: enabled
immediate: true
become: true
- name: Allow Nginx Proxy Manager Admin UI port
ansible.posix.firewalld:
port: 9900/tcp
permanent: true
state: enabled
immediate: true
become: true
- name: Test Nginx HTTP accessibility
ansible.builtin.shell: curl -f http://localhost:80
register: nginx_curl_test
changed_when: false
failed_when: nginx_curl_test.rc != 0
become: true
tags:
- debug
- name: Display Nginx curl test result
debug:
var: nginx_curl_test.stdout
tags:
- debug
- name: Test Wireguard UDP port accessibility
ansible.builtin.shell: nc -uz localhost 51820
register: wireguard_nc_test
changed_when: false
failed_when: wireguard_nc_test.rc != 0
become: true
tags:
- debug
- name: Display Wireguard nc test result
debug:
var: wireguard_nc_test.stdout
tags:
- debug
- name: Test Wireguard Admin UI accessibility
ansible.builtin.shell: curl -f http://localhost:51821
register: wireguard_admin_curl_test
changed_when: false
failed_when: wireguard_admin_curl_test.rc != 0
become: true # Run as root
tags:
- debug
- name: Display Wireguard Admin UI curl test result
debug:
var: wireguard_admin_curl_test.stdout
tags:
- debug
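The curl-based debug tasks above work, but the lessons file in this same changeset recommends modules over shell commands. A module-based sketch of the same HTTP checks (same URLs; the UDP port check has no direct module equivalent):

```yaml
# Sketch: ansible.builtin.uri replaces the shell curl checks; the
# module fails the task itself on a non-2xx response.
- name: Check Nginx on port 80
  ansible.builtin.uri:
    url: http://localhost:80
  register: nginx_http_check
  tags:
    - debug

- name: Check the Wireguard admin UI on port 51821
  ansible.builtin.uri:
    url: http://localhost:51821
  register: wg_admin_check
  tags:
    - debug
```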

View File

@ -19,6 +19,7 @@ services:
cap_add:
- NET_ADMIN
- SYS_MODULE
- NET_RAW
sysctls:
- net.ipv4.ip_forward=1
- net.ipv6.conf.all.disable_ipv6=0
@ -26,6 +27,14 @@ services:
- {{ podman_network_name }}
restart: unless-stopped
healthcheck:
test: ["CMD-SHELL", "nc -uz localhost 51820 || exit 1"]
interval: 10s
timeout: 5s
retries: 3
start_period: 60s
user: root
nginx-proxy-manager:
image: "{{ nginx_proxy_manager_image }}"
container_name: "{{ nginx_proxy_manager_container_name }}"
@ -36,6 +45,13 @@ services:
network_mode: service:wireguard-easy
depends_on:
- wireguard-easy
healthcheck:
test: ["CMD-SHELL", "curl -f http://localhost:80 || exit 1"]
interval: 10s
timeout: 5s
retries: 3
start_period: 60s
user: root
environment:
INITIAL_ADMIN_EMAIL: {{ nginx_proxy_manager_admin_email }}
INITIAL_ADMIN_PASSWORD: {{ nginx_proxy_manager_admin_password }}

View File

@ -16,21 +16,21 @@
- name: Ensure user's Podman config directory exists
ansible.builtin.file:
path: "{{ ansible_user_dir }}/.config/containers"
path: "~/.config/containers"
state: directory
mode: '0755'
owner: "{{ ansible_user }}"
group: "{{ ansible_user }}"
become: true
become: false
- name: Configure unqualified image search registries for Podman (user-specific)
ansible.builtin.copy:
src: ../templates/registries.conf.j2
dest: "{{ ansible_user_dir }}/.config/containers/registries.conf"
ansible.builtin.template:
src: registries.conf.j2
dest: "~/.config/containers/registries.conf"
owner: "{{ ansible_user }}"
group: "{{ ansible_user }}"
mode: '0644'
become: true
become: false
register: copy_registries_conf_output
- name: Display copy_registries_conf_output

View File

@ -7,16 +7,46 @@
- name: Create Portainer container
containers.podman.podman_container:
name: portainer
image: portainer/portainer-ce
image: docker.io/portainer/portainer-ce
state: started
ports:
- "9000:9000"
volumes:
- "/var/run/podman/podman.sock:/var/run/podman/podman.sock"
- "portainer_data:/data"
restart_policy: unless-stopped
healthcheck:
test: "curl -f http://localhost:9000 || exit 1"
interval: 5s
timeout: 3s
retries: 3
start_period: 30s
become: false
- name: Ensure Portainer container is running
containers.podman.podman_container:
name: portainer
state: started
become: false
- name: Allow Portainer UI port
ansible.posix.firewalld:
port: 9000/tcp
permanent: true
state: enabled
immediate: true
become: true
- name: Test Portainer UI accessibility
ansible.builtin.shell: curl -f http://localhost:9000
register: portainer_curl_test
changed_when: false
failed_when: portainer_curl_test.rc != 0
become: true # Run as root
tags:
- debug
- name: Display Portainer curl test result
debug:
var: portainer_curl_test.stdout
tags:
- debug
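One caveat worth hedging: `/var/run/podman/podman.sock` is the rootful socket, while the Portainer container above is created with `become: false`. For a rootless setup the API socket normally lives under `$XDG_RUNTIME_DIR/podman/podman.sock` and must be activated first. A sketch, assuming a systemd user session (with lingering enabled so it survives logout):

```yaml
# Sketch: enable the per-user Podman API socket for rootless Portainer.
- name: Enable the user-level podman.socket
  ansible.builtin.systemd:
    name: podman.socket
    state: started
    enabled: true
    scope: user
  become: false
```

The volume mount would then reference the runtime-dir path (e.g. `{{ ansible_env.XDG_RUNTIME_DIR }}/podman/podman.sock`) instead of `/var/run/podman/podman.sock`; that substitution is an assumption, not something this diff does.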

View File

@ -0,0 +1,7 @@
- name: Debug Network Role
hosts: Scully
become: true
vars:
ansible_python_interpreter: /usr/bin/python3
roles:
- network