Compare commits


41 Commits

SHA1 Message Date
789391c6e1 feat: add firewall rules for nginx and wireguard 2025-09-03 19:07:51 +02:00
95181b9ed9 docs: Mark Phase 1 as complete and update documentation 2025-09-02 22:10:40 +02:00
0550c33943 fix: Remove debug tasks from main.yml 2025-09-02 22:08:23 +02:00
91ac9bd8b4 docs: Update README and PLAN with Podman integration and current progress 2025-09-02 22:06:30 +02:00
e95ae3a430 docs: Update lessons learned with insights on task execution order 2025-09-02 22:06:04 +02:00
aabefee331 fix: Update podman-compose.j2 template to use podman_network_name in wireguard-easy service 2025-09-02 22:03:25 +02:00
e9d5d5f41f fix: Update podman-compose.j2 template to use podman_network_name 2025-09-02 21:49:48 +02:00
f3e92c0ce8 fix: Update network role to use podman-compose.j2 template 2025-09-02 21:35:05 +02:00
df85dd8747 fix: Add Restart Nginx handler to network role 2025-09-02 21:33:13 +02:00
2f29cb006b feat: Add placeholder files for Nginx configuration in network role 2025-09-02 21:30:57 +02:00
e10389d0eb fix: Quote second debug task name in main.yml to resolve YAML parsing error 2025-09-02 21:27:59 +02:00
e5e67d90b0 fix: Quote task name in main.yml to resolve YAML parsing error 2025-09-02 21:24:23 +02:00
18bfb9f3cf docs: Update lessons learned with insights on capabilities and debugging 2025-09-02 21:22:33 +02:00
0a8a563c59 debug: Add debug tasks for registries.conf copy operation 2025-09-02 21:21:11 +02:00
fbeee6de1c debug: Add temporary tasks to read registries.conf on Scully 2025-09-02 21:08:51 +02:00
fdd2017d96 fix: Add Podman user service restart after registries.conf update 2025-09-02 21:07:38 +02:00
0fd124effe fix: Add debug tasks for registries.conf verification 2025-09-02 21:04:43 +02:00
360930920e fix: Configure user-specific registries.conf for rootless Podman 2025-09-02 21:01:48 +02:00
db8928ad54 fix: Correct path to registries.conf.j2 template in podman role 2025-09-02 20:59:33 +02:00
d801752d9a fix: Re-add registries.conf configuration to podman role 2025-09-02 20:57:42 +02:00
c21c6f6af7 fix: Install podman-compose via apt 2025-09-02 20:55:57 +02:00
b62ec3ddb9 fix: Revert playbook to original state for network deployment 2025-09-02 20:50:51 +02:00
2ca5f36d11 fix: Temporarily run only podman role for installation 2025-09-02 20:49:41 +02:00
2e54ae37a2 fix: Add debug tasks to podman role for pip installation issue 2025-09-02 20:48:44 +02:00
20f56da90e fix: Add meta:clear_facts to podman role to ensure pip is found 2025-09-02 20:45:41 +02:00
fbd68fad4c feat: Re-create simplified podman role 2025-09-02 20:44:30 +02:00
e8d7a878ec fix: Install ansible collections to project-specific path 2025-09-02 20:41:39 +02:00
c9ad97aace feat: Use linux.system_roles.podman for Podman installation 2025-09-02 20:40:49 +02:00
dcc09732da fix: Explicitly set python interpreter for ansible 2025-09-02 20:38:50 +02:00
540762538b fix: Ensure podman role runs before network role 2025-09-02 20:37:49 +02:00
41dfeb0e87 feat: Configure Podman registries for image pulling 2025-09-02 20:36:28 +02:00
0cbe5daf54 feat: Clean up Docker references and align with Podman 2025-09-02 20:32:53 +02:00
9d9d07a599 docs: Update planning and lessons learned documentation 2025-09-02 18:43:41 +02:00
653b959cca feat: Update group variables with provided passwords and email 2025-09-02 18:43:22 +02:00
5b142f5c0b feat: Migrate roles to Podman 2025-09-02 18:34:05 +02:00
b227385ae5 feat: Revert playbook to focus on network role for Scully and add podman role 2025-09-02 18:06:19 +02:00
3021a122f7 feat: Focus on network role for Scully 2025-09-02 17:56:40 +02:00
48345583fa fix: Remove tree package from common role 2025-09-02 17:55:57 +02:00
7661df74c1 feat: Configure playbook to run common role on all hosts 2025-09-02 17:50:37 +02:00
f1b574353e feat: Add initial group variables 2025-09-02 17:49:17 +02:00
b58d50a974 docs: Add planning, lessons learned, requirements, and firewall documentation 2025-09-02 17:41:46 +02:00
59 changed files with 815 additions and 453 deletions

.gitignore
@@ -1,6 +1,2 @@
# ---> Ansible
*.retry
private
.vscode
.ansible
.git.vault_password
secrets/

@@ -1 +0,0 @@
changeme

FIREWALL.md
@@ -0,0 +1,9 @@
# Firewall Configuration
Based on the deployment plan, the following ports need to be opened on the firewall for the host **Scully**:
* `80/tcp`: For HTTP traffic, primarily used by Let's Encrypt for certificate validation.
* `443/tcp`: For HTTPS traffic to access all web services.
* `51820/udp`: For the WireGuard VPN tunnel.
No ports need to be opened on the firewall for the host **Mulder**, as Gitea will be accessed through the reverse proxy on Scully.
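The rules above could be applied from an Ansible task rather than by hand; a minimal sketch, assuming firewalld is in use on Scully and the `ansible.posix` collection is installed (the task name is illustrative):

```yaml
- name: Open required ports on Scully
  ansible.posix.firewalld:
    port: "{{ item }}"
    permanent: true
    immediate: true
    state: enabled
  loop:
    - 80/tcp      # HTTP, used by Let's Encrypt validation
    - 443/tcp     # HTTPS for all web services
    - 51820/udp   # WireGuard tunnel
  become: true
```

With `permanent: true` plus `immediate: true`, the rule survives reboots and takes effect without a firewalld reload.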

LESSONS_LEARNED.md
@@ -0,0 +1,12 @@
# Lessons Learned
* The `network` role in this repository is a powerful tool that sets up a complete network stack, including Nginx Proxy Manager for reverse proxying and `wireguard-easy` for a WireGuard web UI.
* The `gitea` and `postgres` roles use Docker Compose to deploy their respective services.
* Properly managing variables, especially secrets like passwords and API keys, is crucial. Using `group_vars` and a `.gitignore`d `secrets` directory is a good practice.
* It's important to have a clear plan and get user feedback before making any changes. The "planning mode" and "acting mode" paradigm is a good way to structure the workflow.
* The `docker` role proved problematic on Ubuntu 24.04 (`noble`) due to repository issues.
* Podman is a viable and simpler alternative to Docker for container management.
* Ansible modules designed for Docker (e.g., `community.docker.docker_compose_v2`, `docker_container`) are not directly compatible with Podman.
* `podman-compose` can be used with `ansible.builtin.shell` for managing `docker-compose.yml` files with Podman.
* `containers.podman.podman_container` is the direct replacement for `docker_container` for managing individual Podman containers.
* Ansible Vault is crucial for securely managing sensitive data like passwords in version control.
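The two Podman approaches named above can be sketched as Ansible tasks; this is a hedged example, with all paths, names, and images illustrative rather than taken from the repository:

```yaml
# Compose-style stack: no native module, so podman-compose runs via shell.
- name: Bring up the stack with podman-compose
  ansible.builtin.shell:
    cmd: podman-compose -f /opt/myapp/docker-compose.yml up -d
    chdir: /opt/myapp
  become: true

# Single container: containers.podman.podman_container replaces docker_container.
- name: Run a single container with the Podman collection
  containers.podman.podman_container:
    name: myapp
    image: docker.io/library/nginx:latest
    state: started
    ports:
      - "8080:80"
```

Note the fully qualified image name (`docker.io/...`): rootless Podman will otherwise consult `registries.conf` for unqualified-search registries.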

PLAN.md
@@ -0,0 +1,37 @@
# Deployment Plan for Home Cloud
## 1. Goal
The goal is to set up a personal cloud environment on your two hosts, Mulder and Scully. This involves deploying Gitea (a self-hosted Git service) on Mulder, and Keycloak (an identity and access management solution) on Scully. All services should be accessible via HTTPS with Let's Encrypt certificates and subdomain-based routing. We will also set up a WireGuard VPN with a web interface for secure access to your network.
## 2. Phased Deployment Plan
### Phase 1: Network Infrastructure on Scully (Completed)
* **Goal:** Deploy the `common` and `network` roles on Scully. The `network` role will set up Nginx Proxy Manager (for HTTPS and subdomain routing) and WireGuard Easy (for VPN with web UI).
* **Host and Role Assignments:**
* **Scully:** `common`, `podman`, `network`
* **Configuration Files:**
* `inventory/hosts.yml`: Defines Mulder and Scully, their connection details, and role-specific variables.
* `playbooks/main.yml`: Modified to execute the `common`, `podman`, and `network` roles on Scully.
* `group_vars/all.yml`: Contains common variables like the domain name and service credentials.
* **Execution Plan:**
1. Run the playbook to deploy the `common`, `podman`, and `network` roles on Scully. (Podman and Portainer are now successfully installed).
2. After successful execution, verify the network services.
### Phase 2: Gitea and Keycloak Deployment (Next)
* **Goal:** Deploy Gitea on Mulder and Keycloak on Scully, along with their respective PostgreSQL databases.
* **Host and Role Assignments:**
* **Mulder:** `common`, `podman`, `postgres`, `gitea`
* **Scully:** `common`, `podman`, `postgres`, `keycloak` (in addition to `network`)
* **Dependencies:** This phase depends on the successful completion of Phase 1 and the availability of the domain name.
* **Next Steps:** Once Phase 1 is complete, we will update the `playbooks/main.yml` and `group_vars/all.yml` to include the `postgres`, `gitea`, and `keycloak` roles.
## 3. What We Still Need
* **Your Domain Name:** Please provide the domain name you want to use for your personal cloud (e.g., `my-cloud.com`). (Already provided as `ai-eifel.de`).
## 4. Dry-Run
Dry runs will be performed where appropriate, but direct execution will be used for tasks that require actual changes to the system.
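The dry-run caveat matters when later tasks depend on software a `--check` run never actually installed. Two common mitigations are sketched below; task names and commands are illustrative, not from the playbooks:

```yaml
# Force a read-only task to run even under --check, so its result is usable later.
- name: Confirm Podman is available (runs even in check mode)
  ansible.builtin.command: podman --version
  check_mode: false
  changed_when: false
  register: podman_version

# Tolerate failures that only occur because --check skipped an earlier install.
- name: List Podman networks (may fail in check mode before Podman exists)
  ansible.builtin.command: podman network ls
  ignore_errors: "{{ ansible_check_mode }}"
  changed_when: false
```

`check_mode: false` and the `ansible_check_mode` variable are the standard escape hatches for this situation.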

@@ -1,45 +1,20 @@
# HomeCloudPlaybooks
This repository contains Ansible playbooks for setting up and configuring a home cloud environment.
My Ansible Playbooks live here
## Requirements
## Podman Integration
- Ansible 2.9 or higher
- Python 3.6 or higher
- `sshpass` installed on the control node
This project has been updated to use Podman as the container runtime instead of Docker.
## Usage
### Key Changes:
* The `docker` role has been removed.
* A custom `podman` role is used to install Podman, `podman-compose`, and `podman-docker`.
* Roles that deploy containers (e.g., `network`, `gitea`, `portainer`) have been adapted to use Podman-compatible commands and modules.
* `podman-compose` is used to manage multi-container applications defined in `podman-compose.j2` templates.
1. **Clone the repository:**
```bash
git clone https://gitea.tobjend.de/tobi/HomeCloudPlaybooks.git
cd HomeCloudPlaybooks
```
### Running Playbooks with Podman:
Ensure Podman is installed and configured on your target hosts. The playbooks will handle the installation of `podman-compose` and `podman-docker`.
2. **Install Ansible collections:**
```bash
ansible-galaxy collection install -r playbooks/requirements.yml
```
## Deployment Status
3. **Configure the inventory:**
- Copy the `inventory/hosts.yml.example` to `inventory/hosts.yml`.
- Update the `inventory/hosts.yml` file with your host information.
4. **Configure secrets:**
- This project uses Ansible Vault to manage secrets.
- Create a `vault_password.txt` file with your vault password.
- Run the playbooks using the `--vault-password-file` option:
```bash
ansible-playbook playbooks/main.yml --vault-password-file vault_password.txt
```
## Inventory Structure
The inventory is located in the `inventory` directory. The main inventory file is `hosts.yml`. The inventory is organized into groups of hosts.
## Roles
The following roles are available in the `playbooks/roles` directory:
- `gitea`: Installs and configures Gitea, a self-hosted Git service.
- ... (more roles to be documented here)
**Network Stack on Scully:** Successfully deployed! The `common`, `podman`, and `network` roles have been applied to Scully, establishing the core network infrastructure including Nginx Proxy Manager and WireGuard Easy.

REQUIREMENTS.md
@@ -0,0 +1,13 @@
# Project Requirements
* Deploy Ansible scripts to two hosts: Mulder and Scully.
* Use a Git repository for version control of the Ansible playbooks.
* Manage SSH keys securely within the project.
* Deploy Gitea on Mulder.
* Deploy Keycloak on Scully.
* Apply a `common` set of configurations to both hosts.
* Set up a reverse proxy with Nginx on Scully.
* Secure all web services with HTTPS and Let's Encrypt certificates.
* Access services via subdomains (e.g., `gitea.my-url.com`, `keycloak.my-url.com`).
* Provide a web interface for managing WireGuard.
* The user wants to be involved in the planning process and approve all changes before they are applied.

@@ -1,7 +1,7 @@
[defaults]
inventory = ./inventory/hosts.yml
remote_user = ubuntu
vault_password_file = ./.vault_password
private_key_file = ./private/astronomican.pem
host_key_checking = False
interpreter_python = auto_silent
roles_path = ./playbooks/roles
roles_path = ./roles

group_vars/all.yml
@@ -0,0 +1,63 @@
$ANSIBLE_VAULT;1.1;AES256
36623161633664656166313034646133383431623938626533653633376333363436306639373463
6635386137333334613737666163306565333833396133310a646662623264653561393363313237
39646230626535313963396261356334313931633863666536373332343266353637343338386361
3732373830666530330a663065363565363536616164393765326663326361373930626330623264
66383832346561376263323533343434633761393439333363316163316463316361396133663237
33393038346366653935393766353963353730393762313764663830383635666532386363343133
38333134363837386565366537636536393731316637346464613234333932386238343266613761
32353666636135343865613364613632333933653364656330306131653363636132323034623565
30323764373030316539316331363331636139366339663731333063643864323665346161383937
32383439363239616165643632303635323964323435353666343332333034663430303437353264
39366234363865333439656562343631383933636437303932396662363564343636326163323433
63373036343365633137363137613534313335633337633135346339366137653866356538383835
61346637643463343365633636663261663033336133613562366439633231313862323662623033
31616365613034393762383162623361336339313035363831613765336432336233393565646233
37653863636465626532616232326234326437643662393738326135626438663937623862326261
65613834646663666134353833316234636530366664613536353339316466356665313164323139
32663137323530366536623437376434383130353238356335626139383066313464623764326437
64636666346563303963393737393339313034383239663431613036303934353330373838343036
64353863333032343034386564373333666231303430383338363639666637623833373663333530
34623534386361626361633866386132316466653338326237323964333037636234393135396139
34353030383536383464303030373737396130313666363533363638633433383565613037393362
65616161386230646234336365356333626463363530326435366464353532323132656437343861
32623264613733643834646665333638663932386163623265643665633230326164363462636138
65343364316133646432316566313165353834646263613036633935626336633434336639343661
36623337346530366263626264653332356436386235633232353030323865313265303461643261
32343333306164653437333037343635383937643638353536383735356365653761323433363064
61663537626239303935313033643864353434636332666563346164333032333364316335623933
62643165366330326636336164393431316538323039383463313031626363346362346633616534
34343131326230633634363363316464633064626464373665316165646534303634343538393238
62313262313835303063336237303462626530323961343732303934663837653539616632396537
62346561623035363963363330663339386262353536383163663431653132643866336631356264
34636133346364613962383061376636653030626264333539336234326238316131303030303061
66336233626231363635653332366562306661303231323538313165303333663232616564613461
64366466383634633039353936353335333738343136616534306161316631613235643062366434
36356536313966356632303062353332653939356163396433353430303661353634333732323037
64643434303534316333313764653461376631666530346262373736323637616532313664303863
38383136636564346632656563646135303438373462626533336464643231353639336161643162
61306665316333633133323238636530663664653534636262646230626637386561326163653739
32303834616435313961373764373730393161626530666233373037633433396436663039346334
35663030316263306537386130313863323636643861663263623639366639353431323738646537
39363666663030373561666331333165336331653033363831383434653365633262666130303233
35306564323761356331373231343439323061376466363130616232316438383162343536353064
31643732363634616337633734386463633736323738303565313233383666363739326230633431
38396634663834353536313532393461613337663461343866333266613464623735346333313061
62383735623632353365303365396266653631333232643634356634363535323631376139383366
36333534633736343830396461393634303537356565313335646338333762326430663937636435
66663934333437653832626365646539666136616138323832353539316161656133333132633332
62633466653066376135613962346431303261303361353034393832386632626662333536626363
38353234323865653264326262653561323635383162643562646333663765326561643330666630
37333265313963616137303734356461613762343031383436343365373930316666336432613561
66316234343634613633366666373232313832323862613961306434346166383130353063373937
61626432353534653561663162663166313564626630356465653637663531303662366334353862
64306536356165616132353639383932336564656266623261643763306239623933643131636632
39636261396638313966393438643431393163646131303538386463386265333065303765616461
34666362386361346534366163323439333464313837356331306561656639653036303965373664
66653334613566393238623034376531393433366466646134346134613434623837623133656561
33353837376432396335363737373365393662633464373763376438313564386464333731383233
34316361396639613237666136313831626637646430303930653361393237353166366262343432
39653032303135383532646330343331626261313736346532633434376233613031303931356237
35306565383133653330356633336631386334396262656630663833386561353365353733656334
36373331316564363537373135643836366232343031383432633739393363616137663236616262
3235326535633839613263303665323230316433353839396465

@@ -1,20 +0,0 @@
$ANSIBLE_VAULT;1.1;AES256
66383465623264336234336665613539316135346265343564393666396566636137316131663731
3063396330653439623765346564616539383933393239320a323961643536303333623434353337
39303265323535633635653639656262396533383035653639643634656132653933383635613936
6263343134616462350a666665363234613864353438313663393230313534346238633731623464
65626432663566396237666232346537386332653634313137663238653631613031663038306161
64316562393664393737303336646562323436323230303835323738633435613363363836646137
32393766643936303732643164363433316239303065363438376431646131623038303238353564
39306339373137623831396238643965636162383063353238376437653236383030633335326662
61633136396461326264313339653937316332656635643230383539626136613666393438653637
64393038643934366231323632663236343932333061316533666536656461373564616235303632
61636231626533303730353563373664383337393866346437623538636130396565336639643137
38616165343833366132346138333930393838303266633038303063626364376431653665303537
36306661633133313839363630303332613164393261313139336239633964376631343732643061
66613337333465333036333534666565373261313865333539666139663735363834643031333836
65316232336363306561353339633364396638643937333830353262326138653231353863376635
37633832323861623833383936383066366639653833356465393263376335333664323863363863
63376235343461303163653662623765383530373561666365313165646161303635303536643137
62613535306661663738363062326133343734313931653534326265313238623531376430613032
356163313666656235343236333166653234

@@ -1,23 +1,10 @@
all:
children:
oracle-cloud-instances:
hosts:
sublimePorte:
ansible_host: 130.162.231.152
ansible_user: ubuntu
ansible_ssh_private_key_file: ~/.ssh/ora-cloud/sublime-key.key
webservices:
ansible_host: 79.76.127.110
ansible_user: ubuntu
ansible_ssh_private_key_file: ~/.ssh/ora-cloud/sublime-key.key
yunohost:
ansible_host: 141.147.24.166
ansible_user: ubuntu
ansible_ssh_private_key_file: ~/.ssh/ora-cloud/sublime-key.key
nextcloud:
ansible_host: 1.2.3.4 # Placeholder IP
ansible_user: ubuntu
ansible_ssh_private_key_file: ~/.ssh/ora-cloud/sublime-key.key
nextcloud_servers:
hosts:
nextcloud:
hosts:
Mulder:
ansible_host: 130.162.234.190
ansible_user: ubuntu
ansible_ssh_private_key_file: "{{ inventory_dir }}/../secrets/sublime-key.key"
Scully:
ansible_host: 92.5.121.208
ansible_user: ubuntu
ansible_ssh_private_key_file: "{{ inventory_dir }}/../secrets/sublime-key.key"

lessons_learned.md
@@ -0,0 +1,21 @@
# Lessons Learned
* The `network` role in this repository is a powerful tool that sets up a complete network stack, including Nginx Proxy Manager for reverse proxying and `wireguard-easy` for a WireGuard web UI.
* The `gitea` and `postgres` roles use Docker Compose to deploy their respective services.
* Properly managing variables, especially secrets like passwords and API keys, is crucial. Using `group_vars` and a `.gitignore`d `secrets` directory is a good practice.
* It's important to have a clear plan and get user feedback before making any changes. The "planning mode" and "acting mode" paradigm is a good way to structure the workflow.
* The `docker` role proved problematic on Ubuntu 24.04 (`noble`) due to repository issues.
* Podman is a viable and simpler alternative to Docker for container management.
* Ansible modules designed for Docker (e.g., `community.docker.docker_compose_v2`, `docker_container`) are not directly compatible with Podman.
* `podman-compose` can be used with `ansible.builtin.shell` for managing `docker-compose.yml` files with Podman.
* `containers.podman.podman_container` is the direct replacement for `docker_container` for managing individual Podman containers.
* Ansible Vault is crucial for securely managing sensitive data like passwords in version control.
* **Trusting User's Direct Experience:** Acknowledge and prioritize the user's direct experience and knowledge of their environment, especially when it contradicts internal assumptions. The user's assertion of capabilities (e.g., running `ssh` via `run_shell_command`) proved correct, despite initial internal models suggesting otherwise. This highlights the importance of humility and adaptability.
* **Verifying Tool Capabilities:** Do not assume limitations of tools (e.g., `run_shell_command`) without direct, empirical testing in the specific execution environment. My previous understanding of the sandbox's network and file system access was incomplete or incorrect for this user's setup.
* **"Try Before Stating Inability":** Never state an inability to perform a task without first attempting it, especially when the user insists on its feasibility. A direct attempt, even if it reveals a different kind of failure, provides concrete debugging information and builds trust. This is a fundamental principle for effective assistance.
* **Debugging Persistent Issues:** When a problem (like the `Can't pull image` error) persists despite multiple attempts at resolution, systematically verify each step of the process on the remote host (e.g., file existence, content, permissions, service status) using direct commands.
* **Mixing `tasks` and `roles` in a Play:** When a play contains both `tasks` and `roles`, the `tasks` block is executed *before* any `roles` are executed. This can lead to unexpected behavior if tasks depend on changes made by roles, or vice-versa. Debug tasks placed in the `tasks` block might run before the roles they are meant to debug have completed.
* **Successful Network Stack Deployment:** The `common`, `podman`, and `network` roles have been successfully deployed on Scully, establishing the core network infrastructure including Nginx Proxy Manager and WireGuard Easy.
* **Persistence of `registries.conf` Issue:** The `registries.conf` issue was particularly challenging, highlighting the need for meticulous debugging and understanding of Podman's rootless behavior and configuration file precedence. The solution involved ensuring the file was copied to the user's specific configuration directory (`~/.config/containers/registries.conf`).
* **Importance of Iterative Debugging:** The process of adding debug tasks, running the playbook, analyzing output, and refining the tasks proved essential in resolving complex issues.
* **Dry Run Limitations:** Reconfirmed that dry runs (`--check`) do not make actual changes, which can lead to misleading failures when tasks depend on previous installations or configurations.
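The `registries.conf` fix described above can be expressed as two tasks; this is a hedged sketch (registry list and home-directory path are illustrative assumptions, not copied from the role):

```yaml
# Install a user-scoped registries.conf for rootless Podman. The user-level
# file at ~/.config/containers/registries.conf takes precedence over
# /etc/containers/registries.conf for that user's rootless containers.
- name: Ensure the user config directory exists
  ansible.builtin.file:
    path: "/home/{{ ansible_user }}/.config/containers"
    state: directory
    owner: "{{ ansible_user }}"
    mode: '0755'

- name: Install user-scoped registries.conf
  ansible.builtin.copy:
    content: |
      unqualified-search-registries = ["docker.io"]
    dest: "/home/{{ ansible_user }}/.config/containers/registries.conf"
    owner: "{{ ansible_user }}"
    mode: '0644'
```

After this, `podman pull nginx` resolves the unqualified name against `docker.io`, which is exactly the class of `Can't pull image` failure the lesson describes.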

@@ -1,3 +1,10 @@
---
- import_playbook: portainer.yml
- import_playbook: nextcloud.yml
- name: Set up network on Scully
hosts: Scully
become: true
vars:
ansible_python_interpreter: /usr/bin/python3
roles:
- common
- podman # Ensure podman is configured before network
- network
- wireguard

@@ -1,5 +0,0 @@
- name: Set up Nextcloud
hosts: nextcloud_servers
become: true
roles:
- nextcloud

@@ -1,7 +0,0 @@
- name: Set up Portainer
hosts: sublimePorte
become: true
roles:
- docker
- portainer
- openwebui

@@ -1,10 +0,0 @@
# requirements.yml
# This file lists the Ansible collections required by the playbooks.
# The collections are based on the commented-out roles in main.yml.
collections:
- name: community.general
version: "3.0.0"
- name: community.crypto
version: "2.0.0"

@@ -0,0 +1,7 @@
---
common_packages:
- git
- nano
- htop
- iputils-ping
- zsh
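Role defaults like this list can be overridden per host group without touching the role; a hedged sketch, where the group file name and the extra package are illustrative:

```yaml
# group_vars/webservices.yml (illustrative) — replaces the role's default list
common_packages:
  - git
  - nano
  - htop
  - iputils-ping
  - zsh
  - tmux   # extra package only for this group
```

Group variables take precedence over role defaults in Ansible's variable precedence order, so the role needs no changes.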

@@ -0,0 +1,20 @@
---
- name: Update apt cache
apt:
update_cache: true
cache_valid_time: 3600
become: true
- name: Install Common packages
apt:
name: "{{ common_packages }}"
state: present
become: true
- name: Set zsh as the default shell
# the user module is idempotent here, unlike shelling out to chsh
user:
name: "{{ ansible_user }}"
shell: /usr/bin/zsh
become: true
when: ansible_user != "root"

@@ -0,0 +1,16 @@
---
# Pi-Hole container configuration
pi_hole_container_name: "pihole"
pi_hole_image: "pihole/pihole:latest"
pi_hole_host_port: "314"
pi_hole_dns_port: "53"
pi_hole_timezone: "Europe/Berlin"
pi_hole_volume_dir: "/opt/pi-hole" # Directory to store Pi-Hole data
pi_hole_web_password: "risICE3!risICE3!" # Change this to a secure password
blocklists:
- https://raw.githubusercontent.com/hagezi/dns-blocklists/main/adblock/pro.txt
- https://raw.githubusercontent.com/daylamtayari/Pi-Hole-Blocklist/master/Pi-Hole-Blocklist.txt
- https://raw.githubusercontent.com/hagezi/dns-blocklists/main/adblock/tif.txt
# Docker network configuration
docker_network_name: "pi-hole-net"

@@ -0,0 +1,15 @@
services:
pihole:
image: pihole/pihole:latest
ports:
- '53:53/tcp'
- '53:53/udp'
- '67:67/udp'
- '80:80/tcp'
environment:
- TZ=Europe/Berlin
- WEBPASSWORD=risICE3!risICE3!
volumes:
- './etc-pihole:/etc/pihole'
- './etc-dnsmasq.d:/etc/dnsmasq.d'
restart: unless-stopped

@@ -0,0 +1,3 @@
---
dependencies:
- role: portainer

@@ -0,0 +1,114 @@
---
- name: Ensure Pi-Hole data directory exists
file:
path: "{{ pi_hole_volume_dir }}"
state: directory
owner: root
group: root
mode: '0755'
become: true
- name: Generate Docker Compose file for Pi-Hole
template:
src: pi-hole-compose.j2
dest: /opt/pi-hole/docker-compose.yml
owner: root
group: root
mode: '0644'
become: true
- name: Ensure Docker network exists
community.docker.docker_network:
name: "{{ docker_network_name }}"
driver: bridge
state: present
- name: Ensure systemd-resolved is installed
ansible.builtin.apt:
name: systemd-resolved
state: present
become: true
- name: Disable DNSStubListener in resolved.conf
ansible.builtin.lineinfile:
path: /etc/systemd/resolved.conf
regexp: '^#?DNSStubListener='
line: 'DNSStubListener=no'
create: true
mode: '0644' # Secure file permissions
become: true
- name: Restart systemd-resolved service
ansible.builtin.service:
name: systemd-resolved
state: restarted
become: true
changed_when: false
- name: Verify port 53 is no longer in use by systemd-resolved
ansible.builtin.shell: ss -tuln | grep ':53'  # shell (not command) is required for the pipe
register: port_check
failed_when: port_check.rc == 0 and '127.0.0.53:53' in port_check.stdout
changed_when: false
become: true
- name: Ensure Docker service directory exists
file:
path: /etc/systemd/system/docker.service.d
state: directory
owner: root
group: root
mode: '0755'
become: true
- name: Add custom DNS settings to Docker service
# lineinfile cannot manage a multi-line block; copy writes the drop-in whole
copy:
dest: /etc/systemd/system/docker.service.d/docker.conf
content: |
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --dns 8.8.8.8 --dns 8.8.4.4
mode: '0644'
become: true
- name: Reload systemd daemon
systemd:
daemon_reload: true
become: true
- name: Restart Docker service
service:
name: docker
state: restarted
become: true
- name: Deploy Pi-Hole container using Docker Compose V2
community.docker.docker_compose_v2:
project_src: /opt/pi-hole
state: present
become: true
- name: Ensure Pi-Hole container is running
community.docker.docker_container_info:
name: "{{ pi_hole_container_name }}"
register: container_info
- name: Restart Pi-Hole container if not running
community.docker.docker_container:
name: "{{ pi_hole_container_name }}"
state: started
restart: true
when: not container_info.container.State.Running
- name: Wait for the container to be fully operational
command: docker exec {{ pi_hole_container_name }} pihole status
register: pihole_status
until: "'Pi-hole blocking is enabled' in pihole_status.stdout"
retries: 30
delay: 5
ignore_errors: true
changed_when: false

@@ -0,0 +1,21 @@
services:
pihole:
container_name: {{ pi_hole_container_name }}
image: {{ pi_hole_image }}
ports:
- "{{ pi_hole_host_port }}:80/tcp"
- "{{ pi_hole_dns_port }}:53/tcp"
- "{{ pi_hole_dns_port }}:53/udp"
environment:
TZ: {{ pi_hole_timezone }}
WEBPASSWORD: {{ pi_hole_web_password }}
volumes:
- "{{ pi_hole_volume_dir }}/etc-pihole:/etc/pihole"
- "{{ pi_hole_volume_dir }}/etc-dnsmasq.d:/etc/dnsmasq.d"
networks:
- {{ docker_network_name }}
restart: unless-stopped
networks:
{{ docker_network_name }}:
driver: bridge

@@ -1,57 +0,0 @@
---
- name: Ensure all previously installed docker packages are uninstalled
apt:
name:
- docker.io
- docker-compose
- docker-compose-v2
- docker-doc
- podman-docker
state: absent
purge: true
- name: Install dependencies
apt:
name:
- ca-certificates
- curl
state: present
- name: Download Docker repository key securely
become: true
get_url:
url: https://download.docker.com/linux/ubuntu/gpg
dest: /etc/apt/keyrings/docker.asc
mode: '0644'
force: true # Ensures updates if the key changes
- name: Add Docker repository
become: true
apt_repository:
repo: "deb [arch=amd64 signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu {{ ansible_distribution_release }} stable"
state: present
update_cache: true
- name: Install Docker and related components
become: true
apt:
name:
- docker-ce
- docker-ce-cli
- containerd.io
- docker-buildx-plugin
- docker-compose-plugin
state: present
- name: Add user to the docker group
user:
name: "{{ ansible_user }}"
groups: docker
append: true
when: ansible_user != "root"
- name: Start and enable Docker service
service:
name: docker
state: started
enabled: true

@@ -0,0 +1,10 @@
# roles/gitea/defaults/main.yml
gitea_version: "latest"
gitea_container_name: "gitea"
gitea_data_path: "/opt/gitea"
gitea_port: 3000
postgres_host: "localhost"
postgres_port: 5432
postgres_db: "postgres"
postgres_user: "tobi"
postgres_password: "risICE3"

@@ -0,0 +1,3 @@
---
dependencies:
- role: postgres

@@ -0,0 +1,33 @@
- name: Create Gitea data directory
file:
path: "{{ gitea_data_path }}"
state: directory
owner: "1000"
group: "1000"
mode: '0755'
become: true
- name: Copy Docker Compose file
template:
src: docker-compose.yml.j2
dest: "{{ gitea_data_path }}/docker-compose.yml"
mode: '0644'
become: true
- name: Deploy Gitea container using Podman Compose
ansible.builtin.shell:
cmd: podman-compose -f {{ gitea_data_path }}/docker-compose.yml up -d  # matches the file name the template task writes
chdir: "{{ gitea_data_path }}"
become: true
- name: Ensure Gitea container is running
ansible.builtin.shell:
cmd: "podman ps -a --filter name={{ gitea_container_name }} --format {% raw %}'{{.Status}}'{% endraw %}"
register: gitea_container_status
changed_when: false
- name: Restart Gitea container if not running
ansible.builtin.shell:
cmd: "podman restart {{ gitea_container_name }}"
when: "'Exited' in gitea_container_status.stdout"
become: true

@@ -0,0 +1,17 @@
services:
gitea:
image: gitea/gitea:{{ gitea_version }}
container_name: {{ gitea_container_name }}
environment:
- USER_UID=1000
- USER_GID=1000
- DB_TYPE=postgres
- DB_HOST={{ postgres_host }}:{{ postgres_port }}
- DB_NAME={{ postgres_db }}
- DB_USER={{ postgres_user }}
- DB_PASSWD={{ postgres_password }}
restart: always
volumes:
- {{ gitea_data_path }}:/data
ports:
- "{{ gitea_port }}:3000"

@@ -5,11 +5,11 @@ nginx_proxy_manager_data_path: "/opt/nginx-proxy-manager/data"
nginx_proxy_manager_letsencrypt_path: "/opt/nginx-proxy-manager/letsencrypt"
nginx_proxy_manager_compose_path: "/opt/nginx-proxy-manager/docker-compose.yml"
nginx_proxy_manager_admin_email: "tobend85@gmail.com"
nginx_proxy_manager_admin_password: "{{ vault_nginx_proxy_manager_admin_password }}"
nginx_proxy_manager_admin_password: "risICE3"
nginx_proxy_manager_port: "9900"
nginx_proxy_manager_ssl_port: "443"
# Docker network configuration
docker_network_name: "sublime-net"
# Podman network configuration
podman_network_name: "sublime-net"
# Wireguard-Easy container configuration
wireguard_easy_image: "ghcr.io/wg-easy/wg-easy"
wireguard_easy_version: "latest"
@ -18,5 +18,5 @@ wireguard_easy_admin_port: "51821"
wireguard_easy_data_dir: "/etc/wireguard"
wireguard_easy_config_dir: "/opt/network"
wireguard_easy_host: "130.162.231.152"
wireguard_easy_password: "{{ vault_wireguard_easy_password }}"
wireguard_easy_password: "admin"
wireguard_easy_password_hash: ""

View File

@ -0,0 +1,5 @@
- name: Reload firewalld
  ansible.builtin.systemd:
    name: firewalld
    state: reloaded
  become: true

View File

@ -1,89 +1,20 @@
- name: Update apt cache
  apt:
    update_cache: true

- name: Install WireGuard and required packages
  apt:
    name:
      - wireguard
      - wireguard-tools
      - resolvconf
    state: present

- name: Ensure WireGuard module is loaded
  modprobe:
    name: wireguard
    state: present

- name: Enable IP forwarding
  sysctl:
    name: net.ipv4.ip_forward
    value: '1'
    state: present

- name: Ensure wireguard config directory exists
  file:
    path: "{{ wireguard_easy_config_dir }}"
    state: directory
    mode: '0755'
  become: true

- name: Ensure WireGuard configuration file exists (optional)
  file:
    path: "{{ wireguard_easy_data_dir }}/wg0.conf"
    state: touch
    owner: root
    group: root
    mode: '0644'

- name: Ensure nginx data directory exists
  file:
    path: "{{ nginx_proxy_manager_data_path }}"
    state: directory
    mode: '0755'
  become: true

- name: Copy Nginx configuration files
  copy:
    src: nginx/data
    dest: "{{ nginx_proxy_manager_data_path }}"
    owner: root
    group: root
    mode: '0644'

- name: Ensure Let's Encrypt directory exists
  file:
    path: "{{ nginx_proxy_manager_letsencrypt_path }}"
    state: directory
    mode: '0755'
  become: true

- name: Copy Let's Encrypt files
  copy:
    src: nginx/letsencrypt
    dest: "{{ nginx_proxy_manager_letsencrypt_path }}"
    owner: root
    group: root
    mode: '0644'
  notify: Restart Nginx

- name: Generate Docker Compose file for Wireguard and Nginx
- name: Generate Podman Compose file for Wireguard and Nginx
  template:
    src: docker-compose.j2
    dest: /opt/network/docker-compose.yml
    src: podman-compose.j2
    dest: /opt/network/podman-compose.yml
    owner: root
    group: root
    mode: '0644'
  become: true

- name: Deploy Containers
  community.docker.docker_compose_v2:
    project_src: /opt/network
    state: present
    restart: true

- name: Open firewall ports for web traffic
  ansible.posix.firewalld:
    port: "{{ item }}"
    permanent: true
    state: enabled
    zone: public
  loop:
    - 80/tcp
    - 443/tcp
  notify: Reload firewalld
  become: true

- name: Ensure Nginx container is running
  community.docker.docker_container_info:
    name: "{{ nginx_proxy_manager_container_name }}"
  register: nginx_container_info
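The final check still uses the Docker collection even though the role now deploys with Podman. A hedged Podman-native equivalent, assuming the `containers.podman` collection is installed, might look like:

```yaml
# Sketch: Podman-native replacement for the docker_container_info check.
- name: Ensure Nginx container is running
  containers.podman.podman_container_info:
    name: "{{ nginx_proxy_manager_container_name }}"
  register: nginx_container_info

- name: Fail if the Nginx container is not running
  ansible.builtin.fail:
    msg: "nginx-proxy-manager is not running"
  when: nginx_container_info.containers | length == 0 or
        nginx_container_info.containers[0].State.Status != 'running'
```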

View File

@ -23,7 +23,7 @@ services:
      - net.ipv4.ip_forward=1
      - net.ipv6.conf.all.disable_ipv6=0
    networks:
      - {{ docker_network_name }}
      - {{ podman_network_name }}
    restart: unless-stopped
  nginx-proxy-manager:
@ -44,5 +44,5 @@ services:
      - "{{ nginx_proxy_manager_letsencrypt_path }}:/etc/letsencrypt"
networks:
  {{ docker_network_name }}:
  {{ podman_network_name }}:
    driver: bridge

View File

@ -1,36 +0,0 @@
# Ansible Role: Nextcloud Docker Compose
An Ansible role to deploy Nextcloud using Docker Compose.
## Requirements
- Docker
- Docker Compose
## Role Variables
Available variables are listed below, along with default values (see `defaults/main.yml`):
```yaml
nextcloud_data_dir: "/opt/nextcloud"
nextcloud_port: 8080
nextcloud_db_name: "nextcloud"
nextcloud_db_user: "nextcloud"
nextcloud_admin_user: "admin"
```
## Dependencies
- docker
## Example Playbook
```yaml
- hosts: "servers"
roles:
- role: "nextcloud"
```
## License
MIT

View File

@ -1,6 +0,0 @@
---
nextcloud_data_dir: "/opt/nextcloud"
nextcloud_port: 8080
nextcloud_db_name: "nextcloud"
nextcloud_db_user: "nextcloud"
nextcloud_admin_user: "admin"

View File

@ -1,12 +0,0 @@
galaxy_info:
  author: "Your Name"
  description: "An Ansible role to deploy Nextcloud using Docker Compose"
  license: "MIT"
  min_ansible_version: "2.9"
  platforms:
    - name: "Ubuntu"
      versions:
        - "focal"
        - "bionic"
dependencies:
  - role: docker

View File

@ -1,16 +0,0 @@
---
- name: "Create Nextcloud directory"
  ansible.builtin.file:
    path: "{{ nextcloud_data_dir }}"
    state: "directory"
    mode: "0755"

- name: "Create Nextcloud docker-compose.yml"
  ansible.builtin.template:
    src: "docker-compose.yml.j2"
    dest: "{{ nextcloud_data_dir }}/docker-compose.yml"

- name: "Start Nextcloud services"
  community.docker.docker_compose:
    project_src: "{{ nextcloud_data_dir }}"
    state: "present"

View File

@ -1,33 +0,0 @@
version: '3'
services:
  db:
    image: postgres
    restart: always
    volumes:
      - db:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB={{ nextcloud_db_name }}
      - POSTGRES_USER={{ nextcloud_db_user }}
      - POSTGRES_PASSWORD={{ vault_nextcloud_db_password }}
  app:
    image: nextcloud
    restart: always
    ports:
      - "{{ nextcloud_port }}:80"
    links:
      - db
    volumes:
      - nextcloud:/var/www/html
    environment:
      - POSTGRES_HOST=db
      - POSTGRES_DB={{ nextcloud_db_name }}
      - POSTGRES_USER={{ nextcloud_db_user }}
      - POSTGRES_PASSWORD={{ vault_nextcloud_db_password }}
      - NEXTCLOUD_ADMIN_USER={{ nextcloud_admin_user }}
      - NEXTCLOUD_ADMIN_PASSWORD={{ vault_nextcloud_admin_password }}
volumes:
  db:
  nextcloud:

View File

@ -1,34 +0,0 @@
# Ansible Role: Open WebUI Docker Compose
An Ansible role to deploy Open WebUI using Docker Compose.
## Requirements
- Docker
- Docker Compose
## Role Variables
Available variables are listed below, along with default values (see `defaults/main.yml`):
```yaml
openwebui_data_dir: "/opt/open-webui"
openwebui_port: 8080
openwebui_ollama_base_url: "http://localhost:11434"
```
## Dependencies
- docker
## Example Playbook
```yaml
- hosts: "servers"
roles:
- role: "openwebui"
```
## License
MIT

View File

@ -1,4 +0,0 @@
---
openwebui_data_dir: "/opt/open-webui"
openwebui_port: 8080
openwebui_ollama_base_url: "http://localhost:11434"

View File

@ -1,12 +0,0 @@
galaxy_info:
  author: "Your Name"
  description: "An Ansible role to deploy Open WebUI using Docker Compose"
  license: "MIT"
  min_ansible_version: "2.9"
  platforms:
    - name: "Ubuntu"
      versions:
        - "focal"
        - "bionic"
dependencies:
  - role: docker

View File

@ -1,16 +0,0 @@
---
- name: "Create Open WebUI directory"
  ansible.builtin.file:
    path: "{{ openwebui_data_dir }}"
    state: "directory"
    mode: "0755"

- name: "Create Open WebUI docker-compose.yml"
  ansible.builtin.template:
    src: "docker-compose.yml.j2"
    dest: "{{ openwebui_data_dir }}/docker-compose.yml"

- name: "Start Open WebUI services"
  community.docker.docker_compose:
    project_src: "{{ openwebui_data_dir }}"
    state: "present"

View File

@ -1,13 +0,0 @@
version: '3.8'
services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    container_name: open-webui
    ports:
      - "{{ openwebui_port }}:8080"
    volumes:
      - "{{ openwebui_data_dir }}:/app/backend/data"
    environment:
      - OLLAMA_BASE_URL={{ openwebui_ollama_base_url }}
    restart: always

View File

@ -0,0 +1,38 @@
---
- name: Install Podman
  ansible.builtin.apt:
    name: podman
    state: present

- name: Install podman-compose
  ansible.builtin.apt:
    name: podman-compose
    state: present

- name: Install podman-docker (optional, for docker command alias)
  ansible.builtin.apt:
    name: podman-docker
    state: present

- name: Ensure user's Podman config directory exists
  ansible.builtin.file:
    path: "{{ ansible_user_dir }}/.config/containers"
    state: directory
    mode: '0755'
    owner: "{{ ansible_user }}"
    group: "{{ ansible_user }}"
  become: true

- name: Configure unqualified image search registries for Podman (user-specific)
  ansible.builtin.template:
    src: registries.conf.j2
    dest: "{{ ansible_user_dir }}/.config/containers/registries.conf"
    owner: "{{ ansible_user }}"
    group: "{{ ansible_user }}"
    mode: '0644'
  become: true
  register: copy_registries_conf_output

- name: Display copy_registries_conf_output
  debug:
    var: copy_registries_conf_output
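For rootless Podman the per-user config only takes effect for that user's processes, so a verification step should run as the login user. A hedged sketch (the `registries.search` key in `podman info --format json` output is an assumption worth confirming on the target Podman version):

```yaml
# Sketch: check which search registries rootless Podman actually sees.
- name: Query podman info as the login user
  ansible.builtin.command: podman info --format json
  become: true
  become_user: "{{ ansible_user }}"
  register: podman_info
  changed_when: false

- name: Display configured search registries
  ansible.builtin.debug:
    msg: "{{ (podman_info.stdout | from_json).registries.search | default([]) }}"
```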

View File

@ -0,0 +1,5 @@
# This file is a template for /etc/containers/registries.conf
# It configures unqualified image search registries for Podman.
[registries.search]
registries = ['docker.io', 'registry.access.redhat.com', 'registry.redhat.io']
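Note that `[registries.search]` is the legacy v1 syntax; Podman 2.x and later expect the TOML v2 form of `registries.conf`, so a hedged equivalent for newer hosts would be:

```toml
# registries.conf, v2 format (Podman >= 2.0)
unqualified-search-registries = ["docker.io", "registry.access.redhat.com", "registry.redhat.io"]
```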

View File

@ -1,3 +1 @@
---
dependencies:
  - role: docker

View File

@ -1,27 +1,22 @@
- name: Ensure Docker service is running
  service:
    name: docker
    state: started
    enabled: true

- name: Pull Portainer Docker image
  community.docker.docker_image:
- name: Pull Portainer Podman image
  containers.podman.podman_image:
    name: portainer/portainer-ce
    source: pull

- name: Create Portainer container
  community.docker.docker_container:
  containers.podman.podman_container:
    name: portainer
    image: portainer/portainer-ce
    state: started
    ports:
      - "9000:9000"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
      - "/var/run/podman/podman.sock:/var/run/podman/podman.sock"
      - "portainer_data:/data"
    restart_policy: unless-stopped

- name: Ensure Portainer container is running
  community.docker.docker_container:
  containers.podman.podman_container:
    name: portainer
    state: started

View File

@ -0,0 +1,7 @@
---
postgres_container_name: postgres
postgres_port: 5432
postgres_user: tobi
postgres_password: risICE3
postgres_data_dir: /var/lib/postgresql/data/pgdata
postgres_volume: /opt/postgresData

View File

@ -0,0 +1,14 @@
---
- name: Run PostgreSQL Podman container
  containers.podman.podman_container:
    name: "{{ postgres_container_name }}"
    image: postgres
    state: started
    ports:
      - "{{ postgres_port }}:5432"
    env:
      POSTGRES_USER: "{{ postgres_user }}"
      POSTGRES_PASSWORD: "{{ postgres_password }}"
      PGDATA: "{{ postgres_data_dir }}"
    volumes:
      - "{{ postgres_volume }}:{{ postgres_data_dir }}"
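On a fresh host the bind-mount source may not exist before the container starts. A hedged preliminary task, assuming the play has privilege escalation available:

```yaml
# Sketch: make sure the host-side volume path exists before the container starts.
- name: Ensure Postgres data directory exists on the host
  ansible.builtin.file:
    path: "{{ postgres_volume }}"
    state: directory
    mode: '0700'
  become: true
```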

View File

@ -0,0 +1,5 @@
- name: Reload firewalld
  ansible.builtin.systemd:
    name: firewalld
    state: reloaded
  become: true

View File

@ -0,0 +1,8 @@
- name: Open firewall port for Wireguard
  ansible.posix.firewalld:
    port: 51820/udp
    permanent: true
    state: enabled
    zone: public
  notify: Reload firewalld
  become: true

View File

@ -0,0 +1,116 @@
# If you come from bash you might have to change your $PATH.
# export PATH=$HOME/bin:$HOME/.local/bin:/usr/local/bin:$PATH
# Path to your Oh My Zsh installation.
export ZSH="$HOME/.oh-my-zsh"
#ZSH_THEME="powerlevel9k/powerlevel9k"
#POWERLEVEL9K_MODE="nerdfont-complete"
#source $ZSH/themes/powerlevel9k/powerlevel9k.zsh-theme
# Set name of the theme to load --- if set to "random", it will
# load a random theme each time Oh My Zsh is loaded, in which case,
# to know which specific one was loaded, run: echo $RANDOM_THEME
# See https://github.com/ohmyzsh/ohmyzsh/wiki/Themes
ZSH_THEME="robbyrussell"
# Set list of themes to pick from when loading at random
# Setting this variable when ZSH_THEME=random will cause zsh to load
# a theme from this variable instead of looking in $ZSH/themes/
# If set to an empty array, this variable will have no effect.
# ZSH_THEME_RANDOM_CANDIDATES=( "robbyrussell" "agnoster" )
# Uncomment the following line to use case-sensitive completion.
CASE_SENSITIVE="false"
# Uncomment the following line to use hyphen-insensitive completion.
# Case-sensitive completion must be off. _ and - will be interchangeable.
HYPHEN_INSENSITIVE="true"
# Uncomment one of the following lines to change the auto-update behavior
# zstyle ':omz:update' mode disabled # disable automatic updates
# zstyle ':omz:update' mode auto # update automatically without asking
# zstyle ':omz:update' mode reminder # just remind me to update when it's time
# Uncomment the following line to change how often to auto-update (in days).
# zstyle ':omz:update' frequency 13
# Uncomment the following line if pasting URLs and other text is messed up.
# DISABLE_MAGIC_FUNCTIONS="true"
# Uncomment the following line to disable colors in ls.
# DISABLE_LS_COLORS="true"
# Uncomment the following line to disable auto-setting terminal title.
# DISABLE_AUTO_TITLE="true"
# Uncomment the following line to enable command auto-correction.
# ENABLE_CORRECTION="true"
# Uncomment the following line to display red dots whilst waiting for completion.
# You can also set it to another string to have that shown instead of the default red dots.
# e.g. COMPLETION_WAITING_DOTS="%F{yellow}waiting...%f"
# Caution: this setting can cause issues with multiline prompts in zsh < 5.7.1 (see #5765)
# COMPLETION_WAITING_DOTS="true"
# Uncomment the following line if you want to disable marking untracked files
# under VCS as dirty. This makes repository status check for large repositories
# much, much faster.
# DISABLE_UNTRACKED_FILES_DIRTY="true"
# Uncomment the following line if you want to change the command execution time
# stamp shown in the history command output.
# You can set one of the optional three formats:
# "mm/dd/yyyy"|"dd.mm.yyyy"|"yyyy-mm-dd"
# or set a custom format using the strftime function format specifications,
# see 'man strftime' for details.
HIST_STAMPS="dd.mm.yyyy"
# Would you like to use another custom folder than $ZSH/custom?
# ZSH_CUSTOM=/path/to/new-custom-folder
# Which plugins would you like to load?
# Standard plugins can be found in $ZSH/plugins/
# Custom plugins may be added to $ZSH_CUSTOM/plugins/
# Example format: plugins=(rails git textmate ruby lighthouse)
# Add wisely, as too many plugins slow down shell startup.
plugins=(git zsh-syntax-highlighting)
source $ZSH/oh-my-zsh.sh
# User configuration
# export MANPATH="/usr/local/man:$MANPATH"
# You may need to manually set your language environment
# export LANG=en_US.UTF-8
# Preferred editor for local and remote sessions
# if [[ -n $SSH_CONNECTION ]]; then
# export EDITOR='vim'
# else
# export EDITOR='nvim'
# fi
# Compilation flags
# export ARCHFLAGS="-arch $(uname -m)"
# Set personal aliases, overriding those provided by Oh My Zsh libs,
# plugins, and themes. Aliases can be placed here, though Oh My Zsh
# users are encouraged to define aliases within a top-level file in
# the $ZSH_CUSTOM folder, with .zsh extension. Examples:
# - $ZSH_CUSTOM/aliases.zsh
# - $ZSH_CUSTOM/macos.zsh
# For a full list of active aliases, run `alias`.
#
# Example aliases
alias zshconfig="nano ~/.zshrc"
#alias ls="colorls"
#function cd { builtin cd "$@" && colorls }
#PATH=$PATH:~/.local/share/gem/ruby/3.3.0/bin
alias cat="batcat"
alias top="htop"
archey
ls

View File

@ -0,0 +1,3 @@
---
dependencies:
  - role: zsh_with_style/subroles/zsh

View File

@ -0,0 +1,33 @@
- name: Check if Oh My Zsh is already installed
  stat:
    path: "{{ user_home }}/.oh-my-zsh"
  register: oh_my_zsh_installed

- name: Debug Oh My Zsh installation status
  debug:
    msg: "Oh My Zsh is {{ 'installed' if oh_my_zsh_installed.stat.exists else 'not installed' }}"
  when: oh_my_zsh_installed is defined

- name: Download Oh My Zsh install script
  get_url:
    url: https://raw.githubusercontent.com/ohmyzsh/ohmyzsh/master/tools/install.sh
    dest: /tmp/install-ohmyzsh.sh
    mode: '0755'  # Makes it executable
  when: not oh_my_zsh_installed.stat.exists

- name: Install Oh My Zsh for the current user
  shell: |
    RUNZSH=no CHSH=no sh /tmp/install-ohmyzsh.sh
  args:
    creates: "{{ user_home }}/.oh-my-zsh"
  when: not oh_my_zsh_installed.stat.exists
  become: true
  become_user: "{{ ansible_user }}"

- name: Clone zsh-syntax-highlighting repository
  git:
    repo: 'https://github.com/zsh-users/zsh-syntax-highlighting.git'
    dest: "{{ user_home }}/.oh-my-zsh/plugins/zsh-syntax-highlighting"
    version: master
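The clone above lands in `$ZSH/plugins`, which Oh My Zsh does scan, but the zsh-syntax-highlighting upstream recommends the custom plugins directory so updates to Oh My Zsh itself do not touch the clone. A hedged variant:

```yaml
# Sketch: clone into the ZSH_CUSTOM plugins path recommended upstream.
- name: Clone zsh-syntax-highlighting into the custom plugins directory
  ansible.builtin.git:
    repo: https://github.com/zsh-users/zsh-syntax-highlighting.git
    dest: "{{ user_home }}/.oh-my-zsh/custom/plugins/zsh-syntax-highlighting"
    version: master
```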

View File

@ -0,0 +1,2 @@
---
# No dependencies for this subrole

View File

@ -0,0 +1,14 @@
---
- name: Install zsh
  apt:
    name: zsh
    state: present

- name: Set zsh as the default shell
  shell: chsh -s $(which zsh) {{ ansible_user }}
  become: true
  when: ansible_user != "root"
  register: chsh_result
  failed_when: chsh_result.rc != 0
  changed_when: false
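Shelling out to `chsh` works but defeats change reporting (`changed_when: false` hides real changes). The `ansible.builtin.user` module is the idiomatic, idempotent way to set a login shell; a sketch, assuming zsh lands at `/usr/bin/zsh` on Debian/Ubuntu:

```yaml
# Sketch: idempotent default-shell change via the user module.
- name: Set zsh as the default shell
  ansible.builtin.user:
    name: "{{ ansible_user }}"
    shell: /usr/bin/zsh
  become: true
  when: ansible_user != "root"
```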

View File

@ -0,0 +1,91 @@
---
- name: Set home directory for the user
  set_fact:
    user_home: "/home/{{ ansible_user }}"

- name: Set up Zsh
  include_role:
    name: zsh_with_style/subroles/zsh

- name: Set up Oh My Zsh
  include_role:
    name: zsh_with_style/subroles/ohmyzsh

- name: Install bat
  apt:
    name: bat
    state: present
    update_cache: true
  become: true

- name: Ensure ~/.local/bin directory exists
  file:
    path: "{{ user_home }}/.local/bin"
    state: directory
    mode: '0755'

- name: Copy the archey 4 .deb package to the remote host
  copy:
    src: archey4_4.15.0.0-1_all.deb  # Name of the .deb file in the `files` folder
    dest: /tmp/archey4_4.15.0.0-1_all.deb
    mode: '0644'

- name: Install archey 4
  apt:
    deb: /tmp/archey4_4.15.0.0-1_all.deb
    state: present
  become: true

- name: Create symlink from batcat to bat
  file:
    src: /usr/bin/batcat
    dest: "{{ user_home }}/.local/bin/bat"
    state: link

- name: Deploy custom .zshrc file
  copy:
    src: .zshrc
    dest: "~{{ ansible_user }}/.zshrc"
    owner: "{{ ansible_user }}"
    group: "{{ ansible_user }}"
    mode: '0644'
  when: ansible_user != "root"

- name: Source .zshrc to apply changes
  shell: |
    source {{ user_home }}/.zshrc
  args:
    executable: /bin/zsh
  become_user: "{{ ansible_user }}"
  become: true
  changed_when: false

# - name: Ensure Ruby is installed
#   apt:
#     name: ruby
#     state: present
#   become: yes
# - name: Get Ruby version
#   command: ruby -e 'puts RUBY_VERSION'
#   register: ruby_version_output
#   become: yes
# - name: Set Ruby version fact
#   set_fact:
#     ruby_version: "{{ ruby_version_output.stdout }}"
# # - name: Ensure gem binary directory is in the user's PATH
# #   lineinfile:
# #     path: "{{ ansible_user_dir }}/.zshrc"
# #     line: 'export PATH="$HOME/.local/share/gem/ruby/{{ ruby_version }}/bin:$PATH"'
# #     create: yes
# #   become: yes
# #   become_user: "{{ ansible_user }}"
# - name: Install colorls gem for the current user
#   gem:
#     name: colorls
#   become: yes
#   become_user: "{{ ansible_user }}"