initial

Ansible/.yamllint (new file)
@@ -0,0 +1,9 @@
---
extends: default

rules:
  line-length:
    max: 120
    level: warning
  truthy:
    allowed-values: ['true', 'false', 'yes', 'no']

Ansible/ANSIBLE_REVIEW_2025.md (new file)
@@ -0,0 +1,205 @@
# Ansible Setup Review - October 2025

## Summary

The Ansible configuration is ~3 years old and needed updates for deprecated syntax and package repositories. Critical issues have been fixed, but some improvements are recommended.

## Issues Fixed

### ✅ Critical Fixes Applied

1. **Updated K3s Version**
   - File: `roles/containers/defaults/main.yml`
   - Changed: `v1.26.0-rc1+k3s1` → `v1.29.0+k3s1`
   - Reason: Was using a 3-year-old release candidate

2. **Removed Deprecated Kubernetes APT Repository**
   - File: `roles/system/tasks/essential.yml`
   - Removed: Google Cloud apt repository configuration
   - Reason: Repository deprecated, K3s provides its own kubectl

3. **Updated Ansible Syntax**
   - Files: `roles/system/tasks/main.yml`, `roles/containers/tasks/main.yml`
   - Changed: `include_tasks` → `ansible.builtin.include_tasks`
   - Reason: Bare `include_tasks` deprecated in Ansible 2.10+

## Recommended Improvements

### 🟡 Medium Priority

1. **Enable Fact Gathering**
   - File: `setup_home_server.yml`
   - Current: `gather_facts: no`
   - Recommendation: Enable facts for better Ansible module compatibility
   - Modern Ansible handles fact gathering efficiently
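Enabling facts is a one-line change in the play header; a minimal sketch (the play layout below is illustrative, not copied from this repo's `setup_home_server.yml`):

```yaml
# setup_home_server.yml (excerpt, hypothetical sketch)
- hosts: x86
  gather_facts: yes   # was: gather_facts: no
  become: yes
  roles:
    - system
    - containers
```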

2. **Update Ansible Collection Syntax**
   - Several files still use short-form module names
   - Recommendation: Use FQCN (Fully Qualified Collection Names)
   - Example: `apt` → `ansible.builtin.apt`
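The change is mechanical; a before/after sketch for a typical package task (the task name and variable are illustrative):

```yaml
# Before: short-form module name
- name: Install packages
  apt:
    name: "{{ extra_packages }}"
    state: present

# After: fully qualified collection name
- name: Install packages
  ansible.builtin.apt:
    name: "{{ extra_packages }}"
    state: present
```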

3. **Add Smartmontools to Essential Packages**
   - Current: Not in default package list
   - Recommendation: Add to `extra_packages` in defaults
   - Already installed manually, should be in automation

## Current State

### Working Configuration
- **K3s Version**: v1.29.0+k3s1 (running)
- **Kubernetes**: 1.29.0
- **OS**: Ubuntu 22.04.4 LTS
- **Kernel**: 5.15.0-160-generic

### Inventory

```ini
[x86]
kimchi ansible_user=tas ansible_host=192.168.178.55

[ARM]
pi-one ansible_user=pi ansible_host=192.168.178.11
```

### Active Roles
- ✅ **system**: System configuration and package management
- ✅ **neovim**: Developer environment setup
- ✅ **containers**: K3s installation and configuration
- ❌ **geerlingguy.security**: Commented out (could be useful)
- ❌ **tailscale**: Disabled by default

## Testing Recommendations

Before running Ansible again:

1. **Install Ansible** (currently not installed on kimchi):
   ```bash
   sudo apt install ansible
   ```

2. **Test Syntax**:
   ```bash
   ansible-playbook --syntax-check setup_home_server.yml
   ```

3. **Dry Run**:
   ```bash
   ansible-playbook -i inventory.ini setup_home_server.yml --check --diff
   ```

4. **Run with Tags**:
   ```bash
   # Test only system tasks
   ansible-playbook -i inventory.ini setup_home_server.yml --tags system --check

   # Run only containers (K3s) tasks
   ansible-playbook -i inventory.ini setup_home_server.yml --tags containers
   ```

## Suggested Package Additions

Add to `roles/system/defaults/main.yml`:

```yaml
extra_packages:
  - vim
  - git
  - curl
  - wget
  - htop
  - smartmontools  # NEW - Drive health monitoring
  - rsync          # NEW - Used by backup scripts
  - tmux
  - tree
  - jq
```

## Integration with New Automation

The following systemd services/timers were added manually and could be integrated into Ansible:

### K3s Maintenance
- **Script**: `/usr/local/bin/k3s-maintenance.sh`
- **Service**: `/etc/systemd/system/k3s-maintenance.{service,timer}`
- **Schedule**: Quarterly
- **Recommendation**: Add to `roles/containers/tasks/` for automation
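Deploying the script plus its units from Ansible could look like the following minimal sketch (the template file names and the `Reload systemd` handler are assumptions, not existing files in this repo):

```yaml
- name: Install k3s maintenance script
  ansible.builtin.copy:
    src: k3s-maintenance.sh
    dest: /usr/local/bin/k3s-maintenance.sh
    mode: '0755'

- name: Install maintenance service and timer
  ansible.builtin.template:
    src: "{{ item }}.j2"
    dest: "/etc/systemd/system/{{ item }}"
    mode: '0644'
  loop:
    - k3s-maintenance.service
    - k3s-maintenance.timer
  notify: Reload systemd

- name: Enable maintenance timer
  ansible.builtin.systemd:
    name: k3s-maintenance.timer
    enabled: yes
    state: started
    daemon_reload: yes
```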

### Backup System
- **Script**: `/usr/local/bin/backup-mirror-sync.sh`
- **Service**: `/etc/systemd/system/backup-mirror-sync.{service,timer}`
- **Schedule**: Weekly on Sundays
- **Recommendation**: Create a new `roles/backup/` role

### SMART Monitoring
- **Config**: `/etc/smartd.conf`
- **Service**: `smartmontools` (system package)
- **Recommendation**: Add a smartd configuration task to `roles/system/`
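A single smartd directive covers attribute monitoring, self-test scheduling, and temperature thresholds; a hedged sketch (device, schedule, and thresholds are illustrative, not the server's actual `/etc/smartd.conf`):

```
# /etc/smartd.conf (excerpt, illustrative)
# Monitor /dev/sdb: all attributes, short self-test daily at 02:00,
# long self-test Saturdays at 03:00, warn at 45 C / critical at 50 C
/dev/sdb -a -o on -S on -s (S/../.././02|L/../../6/03) -W 0,45,50
```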

## Future Ansible Playbook Ideas

### 1. Backup Configuration Playbook
```yaml
# backup_config.yml
- hosts: kimchi
  become: yes
  roles:
    - backup
```

### 2. Monitoring Setup Playbook
```yaml
# monitoring.yml
- hosts: kimchi
  become: yes
  roles:
    - smartmontools
    - prometheus  # Future: Add metrics
```

### 3. Security Hardening
```yaml
# harden.yml
- hosts: kimchi
  become: yes
  roles:
    - geerlingguy.security
    - ufw_firewall  # Could add firewall rules
```

## Compatibility Matrix

| Component | Current Version | Ansible Compatibility |
|-----------|-----------------|-----------------------|
| Ansible   | Not installed (run from laptop) | 2.9+ required |
| Python    | 3.10.6          | ✅ Compatible |
| Ubuntu    | 22.04.4 LTS     | ✅ Compatible |
| K3s       | v1.29.0+k3s1    | ✅ Updated in config |
| systemd   | 249             | ✅ Compatible |

## Next Steps

1. ✅ **Critical fixes applied** - Ansible should now work without errors
2. ⏭️ **Test playbook** - Run with `--check` before applying
3. ⏭️ **Consider adding:**
   - Backup role for `/usr/local/bin/backup-mirror-sync.sh`
   - Maintenance role for `/usr/local/bin/k3s-maintenance.sh`
   - SMART monitoring configuration
4. ⏭️ **Optional improvements:**
   - Enable fact gathering
   - Use FQCN for all modules
   - Add security hardening role

## Migration Notes

The server is **currently working well** with:
- Manual systemd service configuration
- Automated certificate rotation
- Automated backups
- SMART monitoring

**Recommendation**: Don't run Ansible unless you need to:
- Rebuild the server from scratch
- Add/remove system packages
- Update the K3s version
- Replicate the configuration to new nodes

The current manual configurations work reliably and are well-documented in CLAUDE.md and STORAGE.md.

Ansible/MAINTENANCE_ROLES_README.md (new file)
@@ -0,0 +1,294 @@
# Ansible Maintenance Automation Roles

This document describes the Ansible roles and playbooks for setting up automated maintenance, backups, and monitoring on the home server.

## Overview

The maintenance automation consists of three main roles:

1. **maintenance** - Quarterly system maintenance (container cleanup, log cleanup, apt cleanup)
2. **backup** - Weekly backup mirror synchronization with power management
3. **smart_monitoring** - Drive health monitoring with SMART

## Quick Start

### Run Everything

```bash
# Dry run (check mode)
ansible-playbook -i inventory.ini setup_maintenance.yml --check --diff

# Apply configuration
ansible-playbook -i inventory.ini setup_maintenance.yml
```

### Run Specific Components

```bash
# Only set up backups
ansible-playbook -i inventory.ini setup_maintenance.yml --tags backup

# Only set up SMART monitoring
ansible-playbook -i inventory.ini setup_maintenance.yml --tags smart

# Only set up quarterly maintenance
ansible-playbook -i inventory.ini setup_maintenance.yml --tags maintenance

# Check bcache status
ansible-playbook -i inventory.ini setup_maintenance.yml --tags bcache

# Show timer and service status
ansible-playbook -i inventory.ini setup_maintenance.yml --tags status
```

## Roles

### 1. Maintenance Role

**Location**: `roles/maintenance/`

**Purpose**: Sets up quarterly system maintenance automation

**What it configures:**
- `/usr/local/bin/k3s-maintenance.sh` - Maintenance script
- Systemd timer running quarterly (Jan/Apr/Jul/Oct 1 at 3:00 AM)
- Logs to `/var/log/k3s-maintenance.log`

**Tasks performed by the maintenance script:**
- Prunes unused container images
- Cleans journal logs (keeps 30 days)
- Runs `apt autoremove` and `apt autoclean`
- Logs disk usage

**Customization:**

Edit `roles/maintenance/defaults/main.yml`:
```yaml
maintenance_schedule: "*-01,04,07,10-01 03:00:00"  # Quarterly schedule
```
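The schedule variable feeds the timer template; a minimal sketch of what `k3s-maintenance.timer.j2` could contain (the actual template in this repo may differ):

```
[Unit]
Description=Quarterly K3s Maintenance Timer
Requires=k3s-maintenance.service

[Timer]
OnCalendar={{ maintenance_schedule }}
Persistent=true

[Install]
WantedBy=timers.target
```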
### 2. Backup Role

**Location**: `roles/backup/`

**Purpose**: Sets up weekly backup synchronization with power management

**What it configures:**
- `/usr/local/bin/backup-mirror-sync.sh` - Backup sync script
- Mounts the backup drive at `/mnt/backup-mirror`
- Systemd timer running weekly (Sundays at 2:00 AM)
- udev rule for automatic drive spindown (10 minutes)
- Logs to `/var/log/backup-mirror-sync.log`

**Features:**
- Rsync-based incremental backups
- Automatic drive spinup/spindown for energy savings
- SMART health check during backup (while the drive is active)
- Drive temperature monitoring

**Customization:**

Edit `roles/backup/defaults/main.yml`:
```yaml
backup_schedule: "Sun *-*-* 02:00:00"  # Weekly schedule
backup_drive_spindown_timeout: 120     # 10 minutes (120 * 5 seconds)
backup_drive_serial: ZW60BTBJ          # Match your drive
backup_drive_uuid: "your-uuid-here"    # Get from blkid
```

**Important**: Update `backup_drive_uuid` if using a different backup drive!

### 3. SMART Monitoring Role

**Location**: `roles/smart_monitoring/`

**Purpose**: Configures drive health monitoring

**What it configures:**
- Installs the `smartmontools` package
- Configures `/etc/smartd.conf`
- Monitors `/dev/sdb` and `/dev/nvme0n1` continuously
- Excludes `/dev/sda` (backup drive) from continuous monitoring

**Monitoring features:**
- Daily short self-tests at 2:00 AM
- Weekly long self-tests on Saturdays at 3:00 AM
- Temperature monitoring with configurable thresholds
- Automatic notifications via syslog

**Customization:**

Edit `roles/smart_monitoring/defaults/main.yml`:
```yaml
smart_active_drives:
  - device: /dev/sdb
    type: hdd
    temp_warn: 45
    temp_crit: 50
  - device: /dev/nvme0n1
    type: nvme
    temp_warn: 60
    temp_crit: 65
```

## Additional Tasks

### Bcache Health Check

**Location**: `tasks/bcache_check.yml`

**Purpose**: Checks bcache status and re-attaches the cache if detached

**Usage:**
```bash
ansible-playbook -i inventory.ini setup_maintenance.yml --tags bcache
```

**What it does:**
- Checks if the bcache device exists
- Verifies the cache is attached
- Re-attaches the cache if detached
- Shows cache hit/miss statistics
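The re-attach step works through sysfs; a hedged sketch of how such tasks could look (the `bcache_cache_set_uuid` variable and the detached-state check are assumptions, not taken from this repo's `bcache_check.yml`):

```yaml
- name: Check bcache state
  ansible.builtin.command: cat /sys/block/bcache0/bcache/state
  register: bcache_state
  changed_when: false

- name: Re-attach cache set if detached
  ansible.builtin.shell: echo "{{ bcache_cache_set_uuid }}" > /sys/block/bcache0/bcache/attach
  when: "'no cache' in bcache_state.stdout"
```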
## File Structure

```
Ansible/
├── setup_maintenance.yml              # Main playbook
├── tasks/
│   └── bcache_check.yml               # Bcache health check
└── roles/
    ├── maintenance/
    │   ├── defaults/
    │   │   └── main.yml               # Default variables
    │   ├── files/
    │   │   └── k3s-maintenance.sh     # Maintenance script
    │   ├── templates/
    │   │   ├── k3s-maintenance.service.j2
    │   │   └── k3s-maintenance.timer.j2
    │   ├── tasks/
    │   │   └── main.yml               # Role tasks
    │   └── handlers/
    │       └── main.yml               # Systemd reload handler
    ├── backup/
    │   ├── defaults/
    │   │   └── main.yml               # Default variables
    │   ├── files/
    │   │   └── backup-mirror-sync.sh
    │   ├── templates/
    │   │   ├── 99-backup-drive-power.rules.j2
    │   │   ├── backup-mirror-sync.service.j2
    │   │   └── backup-mirror-sync.timer.j2
    │   ├── tasks/
    │   │   └── main.yml               # Role tasks
    │   └── handlers/
    │       └── main.yml               # Systemd/udev reload handlers
    └── smart_monitoring/
        ├── defaults/
        │   └── main.yml               # Default variables
        ├── templates/
        │   └── smartd.conf.j2         # SMART config template
        ├── tasks/
        │   └── main.yml               # Role tasks
        └── handlers/
            └── main.yml               # Smartmontools restart handler
```

## Verification

After running the playbook, verify everything is configured:

```bash
# Check timers are scheduled
systemctl list-timers

# Verify maintenance timer
systemctl status k3s-maintenance.timer

# Verify backup timer
systemctl status backup-mirror-sync.timer

# Check SMART monitoring
systemctl status smartmontools

# Check bcache status
cat /sys/block/bcache0/bcache/state

# Check backup drive power state
sudo hdparm -C /dev/sda
```

## Logs

All automation logs are centralized:

- **Maintenance**: `/var/log/k3s-maintenance.log`
- **Backup**: `/var/log/backup-mirror-sync.log`
- **Certificate Rotation**: `/var/log/k3s-cert-rotation.log`
- **SMART**: `journalctl -u smartmontools`

## Troubleshooting

### Backup drive UUID mismatch

If you get mount errors:

```bash
# Get the correct UUID
sudo blkid /dev/sda1

# Update roles/backup/defaults/main.yml with the correct UUID
```

### SMART monitoring not starting

```bash
# Check the config and run one check cycle, then exit
sudo smartd -q onecheck

# View errors
sudo journalctl -u smartmontools -n 50
```

### Timer not running

```bash
# Reload systemd
sudo systemctl daemon-reload

# Re-enable the timer
sudo systemctl enable --now k3s-maintenance.timer
```

## Integration with Existing Setup

This playbook is designed to work alongside the existing `setup_home_server.yml`:

```bash
# Full server setup (system + K3s + maintenance)
ansible-playbook -i inventory.ini setup_home_server.yml
ansible-playbook -i inventory.ini setup_maintenance.yml

# Or add to setup_home_server.yml by including these roles
```

## Customization Tips

1. **Change the backup schedule**: Edit `roles/backup/defaults/main.yml`
2. **Add more drives to SMART**: Edit `roles/smart_monitoring/defaults/main.yml`
3. **Adjust the spindown timeout**: Edit `roles/backup/defaults/main.yml`
4. **Change maintenance frequency**: Edit `roles/maintenance/defaults/main.yml`

## Idempotency

All roles are idempotent - you can run the playbook multiple times safely. It will only make changes if the configuration has drifted from the desired state.

## Testing

Always test with `--check` first:

```bash
ansible-playbook -i inventory.ini setup_maintenance.yml --check --diff
```

This shows what would change without actually making changes.

Ansible/ansible.cfg (new file)
@@ -0,0 +1,12 @@
[defaults]
nocows = True
roles_path = ./roles
inventory = ./inventory.ini

remote_tmp = $HOME/.ansible/tmp
local_tmp = $HOME/.ansible/tmp
pipelining = True
become = True
host_key_checking = False
deprecation_warnings = False
callback_whitelist = profile_tasks

Ansible/generate_autoinstall_iso.yml (new file)
@@ -0,0 +1,9 @@
---
- hosts: localhost
  gather_facts: yes
  become: no

  roles:
    - ubuntu_autoinstall

Ansible/inventory.ini (new file)
@@ -0,0 +1,5 @@
[x86]
kimchi ansible_user=tas ansible_host=192.168.178.55

[ARM]
pi-one ansible_user=pi ansible_host=192.168.178.11

Ansible/roles/backup/defaults/main.yml (new file)
@@ -0,0 +1,20 @@
---
# Backup configuration
backup_source: /mnt/bcache/
backup_destination: /mnt/backup-mirror/
backup_script_path: /usr/local/bin/backup-mirror-sync.sh
backup_log_path: /var/log/backup-mirror-sync.log

# Backup drive configuration
backup_drive: /dev/sda
backup_drive_serial: ZW60BTBJ  # Seagate IronWolf serial number
backup_drive_partition: /dev/sda1
backup_drive_uuid: "794c73e6-9a27-444e-865c-c090ef40bf38"
backup_drive_label: backup-mirror

# Power management
# hdparm -S values 1-240 encode multiples of 5 seconds: 120 * 5 s = 10 minutes
backup_drive_spindown_timeout: 120

# Systemd timer configuration
# Run weekly on Sundays at 2:00 AM
backup_schedule: "Sun *-*-* 02:00:00"

Ansible/roles/backup/files/backup-mirror-sync.sh (new executable file)
@@ -0,0 +1,136 @@
#!/bin/bash

# Backup Mirror Sync Script
# Syncs data from /mnt/bcache to /mnt/backup-mirror

set -euo pipefail

LOGFILE="/var/log/backup-mirror-sync.log"
SOURCE="/mnt/bcache/"
DESTINATION="/mnt/backup-mirror/"
LOCKFILE="/var/run/backup-mirror-sync.lock"

# Function to log messages
log() {
    echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" | tee -a "$LOGFILE"
}

# Function to send a notification via the log and syslog
notify() {
    local status="$1"
    local message="$2"
    log "NOTIFICATION [$status]: $message"
    logger -t "backup-mirror-sync" "[$status] $message"
}

# Check if already running
if [ -f "$LOCKFILE" ]; then
    log "ERROR: Backup sync already running (lockfile exists)"
    exit 1
fi

# Create lockfile and remove it on exit
trap 'rm -f "$LOCKFILE"' EXIT
touch "$LOCKFILE"

# Verify source and destination exist
if [ ! -d "$SOURCE" ]; then
    log "ERROR: Source directory $SOURCE does not exist"
    notify "ERROR" "Source directory missing"
    exit 1
fi

if [ ! -d "$DESTINATION" ]; then
    log "ERROR: Destination directory $DESTINATION does not exist"
    notify "ERROR" "Destination directory missing"
    exit 1
fi

# Check if destination is mounted
if ! mountpoint -q "$DESTINATION"; then
    log "ERROR: Destination $DESTINATION is not mounted"
    notify "ERROR" "Backup drive not mounted"
    exit 1
fi

log "=== Starting backup sync ==="
log "Source: $SOURCE"
log "Destination: $DESTINATION"

# Check if the backup drive is spun down and wake it up
# (|| true keeps set -e/pipefail from aborting if grep matches nothing)
BACKUP_DRIVE="/dev/sda"
DRIVE_STATE=$(hdparm -C "$BACKUP_DRIVE" 2>/dev/null | grep "drive state" | awk '{print $NF}' || true)
log "Backup drive state: $DRIVE_STATE"

if [ "$DRIVE_STATE" = "standby" ] || [ "$DRIVE_STATE" = "sleeping" ]; then
    log "Backup drive is in standby/sleep mode, waking it up..."
    # Access the mount point to trigger spinup
    ls "$DESTINATION" > /dev/null 2>&1
    # Wait for the drive to fully spin up (typically 5-10 seconds)
    sleep 10
    log "Drive spinup complete"
fi

# Get sizes before sync
SOURCE_SIZE=$(du -sh "$SOURCE" 2>/dev/null | cut -f1)
DEST_SIZE_BEFORE=$(du -sh "$DESTINATION" 2>/dev/null | cut -f1)

log "Source size: $SOURCE_SIZE"
log "Destination size (before): $DEST_SIZE_BEFORE"

# Perform rsync
# Options explained:
#   -a: archive mode (recursive, preserve permissions, times, etc.)
#   -v: verbose
#   -h: human-readable
#   --delete: delete files in destination that don't exist in source
#   --delete-excluded: delete excluded files from destination
#   --stats: show transfer statistics

START_TIME=$(date +%s)

if rsync -avh --delete --delete-excluded \
    --exclude='lost+found' \
    --stats \
    "$SOURCE" "$DESTINATION" >> "$LOGFILE" 2>&1; then

    END_TIME=$(date +%s)
    DURATION=$((END_TIME - START_TIME))

    DEST_SIZE_AFTER=$(du -sh "$DESTINATION" 2>/dev/null | cut -f1)

    log "Backup sync completed successfully"
    log "Duration: ${DURATION}s"
    log "Destination size (after): $DEST_SIZE_AFTER"

    # Check disk usage
    DEST_USAGE=$(df -h "$DESTINATION" | tail -1 | awk '{print $5}')
    log "Destination disk usage: $DEST_USAGE"

    # Check backup drive SMART health (while the drive is already active)
    log "Checking backup drive SMART health..."
    if smartctl -H "$BACKUP_DRIVE" >> "$LOGFILE" 2>&1; then
        SMART_STATUS="PASSED"
        log "Backup drive SMART health: PASSED"
    else
        SMART_STATUS="FAILED"
        log "WARNING: Backup drive SMART health check FAILED!"
        notify "WARNING" "Backup drive SMART health check FAILED - check logs"
    fi

    # Get the drive temperature while it's active
    DRIVE_TEMP=$(smartctl -a "$BACKUP_DRIVE" 2>/dev/null | grep "Temperature_Celsius" | awk '{print $10}' || true)
    if [ -n "$DRIVE_TEMP" ]; then
        log "Backup drive temperature: ${DRIVE_TEMP}°C"
    fi

    notify "SUCCESS" "Backup sync completed in ${DURATION}s - Disk usage: $DEST_USAGE - SMART: $SMART_STATUS"

    log "=== Backup sync completed ==="
    exit 0
else
    log "ERROR: Backup sync failed"
    notify "ERROR" "Backup sync failed - check logs"
    exit 1
fi

Ansible/roles/backup/handlers/main.yml (new file)
@@ -0,0 +1,7 @@
---
- name: Reload systemd
  ansible.builtin.systemd:
    daemon_reload: yes

- name: Reload udev rules
  ansible.builtin.command: udevadm control --reload-rules

Ansible/roles/backup/tasks/main.yml (new file)
@@ -0,0 +1,72 @@
---
- name: Install required packages
  ansible.builtin.apt:
    name:
      - rsync
      - hdparm
    state: present
    update_cache: yes

- name: Check if backup drive partition exists
  ansible.builtin.stat:
    path: "{{ backup_drive_partition }}"
  register: backup_partition

- name: Create mount point for backup drive
  ansible.builtin.file:
    path: "{{ backup_destination }}"
    state: directory
    owner: root
    group: root
    mode: '0755'

- name: Add backup drive to fstab
  ansible.posix.mount:
    path: "{{ backup_destination }}"
    src: "UUID={{ backup_drive_uuid }}"
    fstype: ext4
    opts: defaults,nofail
    state: mounted
  when: backup_partition.stat.exists

- name: Install backup script
  ansible.builtin.copy:
    src: backup-mirror-sync.sh
    dest: "{{ backup_script_path }}"
    owner: root
    group: root
    mode: '0755'

- name: Install udev rule for backup drive power management
  ansible.builtin.template:
    src: 99-backup-drive-power.rules.j2
    dest: /etc/udev/rules.d/99-backup-drive-power.rules
    owner: root
    group: root
    mode: '0644'
  notify: Reload udev rules

- name: Install backup systemd service
  ansible.builtin.template:
    src: backup-mirror-sync.service.j2
    dest: /etc/systemd/system/backup-mirror-sync.service
    owner: root
    group: root
    mode: '0644'
  notify: Reload systemd

- name: Install backup systemd timer
  ansible.builtin.template:
    src: backup-mirror-sync.timer.j2
    dest: /etc/systemd/system/backup-mirror-sync.timer
    owner: root
    group: root
    mode: '0644'
  notify: Reload systemd

- name: Enable and start backup timer
  ansible.builtin.systemd:
    name: backup-mirror-sync.timer
    enabled: yes
    state: started
    daemon_reload: yes

Ansible/roles/backup/templates/99-backup-drive-power.rules.j2 (new file)
@@ -0,0 +1,6 @@
# Power management for backup mirror drive
# Spins down after 10 minutes of inactivity to save energy
# The drive will auto-spinup on access during weekly backups

# Match by serial number for reliability
ACTION=="add|change", KERNEL=="sd[a-z]", ATTRS{serial}=="{{ backup_drive_serial }}", RUN+="/usr/sbin/hdparm -S {{ backup_drive_spindown_timeout }} /dev/%k"

Ansible/roles/backup/templates/backup-mirror-sync.service.j2 (new file)
@@ -0,0 +1,14 @@
[Unit]
Description=Backup Mirror Sync
After=network.target local-fs.target mnt-bcache.mount mnt-backup\x2dmirror.mount

[Service]
Type=oneshot
ExecStart={{ backup_script_path }}
User=root
StandardOutput=journal
StandardError=journal
LockPersonality=yes

[Install]
WantedBy=multi-user.target

Ansible/roles/backup/templates/backup-mirror-sync.timer.j2 (new file)
@@ -0,0 +1,12 @@
[Unit]
Description=Weekly Backup Mirror Sync Timer
Requires=backup-mirror-sync.service

[Timer]
# Run every Sunday at 2:00 AM
OnCalendar={{ backup_schedule }}
# If the system was off when the timer elapsed, catch up on the next boot
Persistent=true

[Install]
WantedBy=timers.target

Ansible/roles/containers/defaults/main.yml (new file)
@@ -0,0 +1,8 @@
---
k3s_server_location: /var/lib/rancher/k3s
# Current version running on kimchi (updated Oct 2025)
k3s_version: v1.29.0+k3s1
systemd_dir: /etc/systemd/system
master_ip: "{{ hostvars[groups['x86'][0]]['ansible_host'] | default('localhost') }}"
extra_server_args: ""
extra_agent_args: ""

Ansible/roles/containers/tasks/k3s.yml (new file)
@@ -0,0 +1,103 @@
---
- name: Enable IPv4 forwarding
  sysctl:
    name: net.ipv4.ip_forward
    value: "1"
    state: present
    reload: yes

- name: Enable IPv6 forwarding
  sysctl:
    name: net.ipv6.conf.all.forwarding
    value: "1"
    state: present
    reload: yes
  when: ansible_all_ipv6_addresses

- name: Download k3s binary x86_64
  get_url:
    url: https://github.com/k3s-io/k3s/releases/download/{{ k3s_version }}/k3s
    checksum: sha256:https://github.com/k3s-io/k3s/releases/download/{{ k3s_version }}/sha256sum-amd64.txt
    dest: /usr/local/bin/k3s
    owner: root
    group: root
    mode: 0755
  when: ansible_facts.architecture == "x86_64"

- name: Copy K3s service file
  template:
    src: "k3s.service.j2"
    dest: "{{ systemd_dir }}/k3s.service"
    owner: root
    group: root
    mode: 0644
  register: k3s_service

- name: Enable and check K3s service
  systemd:
    name: k3s
    daemon_reload: yes
    state: restarted
    enabled: yes

- name: Wait for node-token
  wait_for:
    path: "{{ k3s_server_location }}/server/node-token"

- name: Register node-token file access mode
  stat:
    path: "{{ k3s_server_location }}/server/node-token"
  register: p

- name: Change file access node-token
  file:
    path: "{{ k3s_server_location }}/server/node-token"
    mode: "g+rx,o+rx"

- name: Read node-token from master
  slurp:
    path: "{{ k3s_server_location }}/server/node-token"
  register: node_token

- name: Store Master node-token
  set_fact:
    token: "{{ node_token.content | b64decode | regex_replace('\n', '') }}"

- name: Restore node-token file access
  file:
    path: "{{ k3s_server_location }}/server/node-token"
    mode: "{{ p.stat.mode }}"

- name: Create directory .kube
  file:
    path: ~{{ ansible_user }}/.kube
    state: directory
    owner: "{{ ansible_user }}"
    mode: "u=rwx,g=rx,o="

- name: Copy config file to user home directory
  copy:
    src: /etc/rancher/k3s/k3s.yaml
    dest: ~{{ ansible_user }}/.kube/config
    remote_src: yes
    owner: "{{ ansible_user }}"
    mode: "u=rw,g=,o="

# - name: Replace https://localhost:6443 by https://master-ip:6443
#   command: >-
#     k3s kubectl config set-cluster default
#       --server=https://{{ master_ip }}:6443
#       --kubeconfig ~{{ ansible_user }}/.kube/config
#   changed_when: true

- name: Create kubectl symlink
  file:
    src: /usr/local/bin/k3s
    dest: /usr/local/bin/kubectl
    state: link

- name: Create crictl symlink
  file:
    src: /usr/local/bin/k3s
    dest: /usr/local/bin/crictl
    state: link

Ansible/roles/containers/tasks/main.yml (new file)
@@ -0,0 +1,3 @@
---
- name: Include K3s setup tasks
  ansible.builtin.include_tasks: k3s.yml

Ansible/roles/containers/templates/k3s.service.j2 (new file)
@@ -0,0 +1,24 @@
[Unit]
Description=Lightweight Kubernetes
Documentation=https://k3s.io
After=network-online.target

[Service]
Type=notify
ExecStartPre=-/sbin/modprobe br_netfilter
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/k3s server --data-dir {{ k3s_server_location }} {{ extra_server_args | default("") }}
KillMode=process
Delegate=yes
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=1048576
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
TimeoutStartSec=0
Restart=always
RestartSec=5s

[Install]
WantedBy=multi-user.target
8
Ansible/roles/maintenance/defaults/main.yml
Normal file
@@ -0,0 +1,8 @@
---
# Quarterly maintenance script configuration
maintenance_script_path: /usr/local/bin/k3s-maintenance.sh
maintenance_log_path: /var/log/k3s-maintenance.log

# Systemd timer configuration
# Run quarterly: January 1, April 1, July 1, October 1 at 3:00 AM
maintenance_schedule: "*-01,04,07,10-01 03:00:00"
89
Ansible/roles/maintenance/files/k3s-maintenance.sh
Executable file
@@ -0,0 +1,89 @@
#!/bin/bash

# K3s Maintenance Script
# Performs regular maintenance tasks including:
# - Pruning unused container images
# - Cleaning old journal logs
# - Cleaning apt cache

set -euo pipefail

LOGFILE="/var/log/k3s-maintenance.log"

# Function to log messages
log() {
    echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" | tee -a "$LOGFILE"
}

# Prune unused container images
prune_container_images() {
    log "Starting container image cleanup"

    # Get images before pruning
    local images_before=$(crictl images -q | wc -l)

    if crictl rmi --prune >> "$LOGFILE" 2>&1; then
        local images_after=$(crictl images -q | wc -l)
        local images_removed=$((images_before - images_after))
        log "Container image cleanup completed: removed $images_removed images"
    else
        log "WARNING: Container image cleanup encountered errors"
    fi
}

# Clean journal logs
clean_journal_logs() {
    log "Starting journal log cleanup"

    local before=$(journalctl --disk-usage 2>/dev/null | grep -oP '\d+\.\d+[A-Z]' | head -1)

    if journalctl --vacuum-time=30d >> "$LOGFILE" 2>&1; then
        local after=$(journalctl --disk-usage 2>/dev/null | grep -oP '\d+\.\d+[A-Z]' | head -1)
        log "Journal log cleanup completed: $before -> $after"
    else
        log "WARNING: Journal log cleanup encountered errors"
    fi
}

# Clean apt cache
clean_apt_cache() {
    log "Starting apt cleanup"

    if apt-get autoremove -y >> "$LOGFILE" 2>&1; then
        log "Apt autoremove completed"
    else
        log "WARNING: Apt autoremove encountered errors"
    fi

    if apt-get autoclean -y >> "$LOGFILE" 2>&1; then
        log "Apt autoclean completed"
    else
        log "WARNING: Apt autoclean encountered errors"
    fi
}

# Main execution
main() {
    log "=== Starting K3s maintenance ==="

    # Check if running as root
    if [[ $EUID -ne 0 ]]; then
        echo "This script must be run as root"
        exit 1
    fi

    # Run maintenance tasks
    prune_container_images
    clean_journal_logs
    clean_apt_cache

    # Log disk usage after cleanup
    local root_usage=$(df -h / | tail -1 | awk '{print $5}')
    log "Root partition usage: $root_usage"

    log "=== K3s maintenance completed ==="
    logger -t "k3s-maintenance" "Quarterly maintenance completed - root partition at $root_usage"
}

# Run main function
main "$@"
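The script's `log` helper both prints each timestamped message and appends it to the logfile via `tee -a`. A standalone check of that behavior, using a temp file instead of `/var/log`:

```shell
# Sketch: exercise the log() helper against a temporary logfile
# instead of /var/log/k3s-maintenance.log.
LOGFILE=$(mktemp)
log() {
    echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" | tee -a "$LOGFILE"
}

# The message should appear on stdout and exactly once in the file.
log "hello from maintenance"
hits=$(grep -c "hello from maintenance" "$LOGFILE")
echo "occurrences in logfile: $hits"
rm -f "$LOGFILE"
```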
4
Ansible/roles/maintenance/handlers/main.yml
Normal file
@@ -0,0 +1,4 @@
---
- name: Reload systemd
  ansible.builtin.systemd:
    daemon_reload: yes
41
Ansible/roles/maintenance/tasks/main.yml
Normal file
@@ -0,0 +1,41 @@
---
- name: Install maintenance script
  ansible.builtin.copy:
    src: k3s-maintenance.sh
    dest: "{{ maintenance_script_path }}"
    owner: root
    group: root
    mode: '0755'

- name: Create maintenance log directory
  ansible.builtin.file:
    path: "{{ maintenance_log_path | dirname }}"
    state: directory
    owner: root
    group: root
    mode: '0755'

- name: Install maintenance systemd service
  ansible.builtin.template:
    src: k3s-maintenance.service.j2
    dest: /etc/systemd/system/k3s-maintenance.service
    owner: root
    group: root
    mode: '0644'
  notify: Reload systemd

- name: Install maintenance systemd timer
  ansible.builtin.template:
    src: k3s-maintenance.timer.j2
    dest: /etc/systemd/system/k3s-maintenance.timer
    owner: root
    group: root
    mode: '0644'
  notify: Reload systemd

- name: Enable and start maintenance timer
  ansible.builtin.systemd:
    name: k3s-maintenance.timer
    enabled: yes
    state: started
    daemon_reload: yes
13
Ansible/roles/maintenance/templates/k3s-maintenance.service.j2
Normal file
@@ -0,0 +1,13 @@
[Unit]
Description=K3s Quarterly Maintenance
After=network.target k3s.service

[Service]
Type=oneshot
ExecStart={{ maintenance_script_path }}
User=root
StandardOutput=journal
StandardError=journal

[Install]
WantedBy=multi-user.target
12
Ansible/roles/maintenance/templates/k3s-maintenance.timer.j2
Normal file
@@ -0,0 +1,12 @@
[Unit]
Description=K3s Quarterly Maintenance Timer
Requires=k3s-maintenance.service

[Timer]
# Run quarterly: January 1, April 1, July 1, October 1 at 3:00 AM
OnCalendar={{ maintenance_schedule }}
# If the system was off when the timer should have run, run it when the system boots
Persistent=true

[Install]
WantedBy=timers.target
10
Ansible/roles/neovim/files/after/plugin/colors.lua
Normal file
@@ -0,0 +1,10 @@
function ColorMyPencils(color)
    color = color or "rose-pine"
    vim.cmd.colorscheme(color)

    vim.api.nvim_set_hl(0, "Normal", { bg = "none" })
    vim.api.nvim_set_hl(0, "NormalFloat", { bg = "none" })

end

ColorMyPencils()
29
Ansible/roles/neovim/files/after/plugin/fugitiv.lua
Normal file
@@ -0,0 +1,29 @@
vim.keymap.set("n", "<leader>gs", vim.cmd.Git)

local ThePrimeagen_Fugitive = vim.api.nvim_create_augroup("ThePrimeagen_Fugitive", {})

local autocmd = vim.api.nvim_create_autocmd
autocmd("BufWinEnter", {
    group = ThePrimeagen_Fugitive,
    pattern = "*",
    callback = function()
        if vim.bo.ft ~= "fugitive" then
            return
        end

        local bufnr = vim.api.nvim_get_current_buf()
        local opts = {buffer = bufnr, remap = false}
        vim.keymap.set("n", "<leader>p", function()
            vim.cmd.Git('push')
        end, opts)

        -- rebase always
        vim.keymap.set("n", "<leader>P", function()
            vim.cmd.Git({'pull', '--rebase'})
        end, opts)

        -- NOTE: It allows me to easily set the branch i am pushing and any tracking
        -- needed if i did not set the branch up correctly
        vim.keymap.set("n", "<leader>t", ":Git push -u origin ", opts);
    end,
})
10
Ansible/roles/neovim/files/after/plugin/harpoon.lua
Normal file
@@ -0,0 +1,10 @@
local mark = require("harpoon.mark")
local ui = require("harpoon.ui")

vim.keymap.set("n", "<leader>a", mark.add_file)
vim.keymap.set("n", "<C-e>", ui.toggle_quick_menu)

vim.keymap.set("n", "<C-h>", function() ui.nav_file(1) end)
vim.keymap.set("n", "<C-t>", function() ui.nav_file(2) end)
vim.keymap.set("n", "<C-n>", function() ui.nav_file(3) end)
vim.keymap.set("n", "<C-s>", function() ui.nav_file(4) end)
65
Ansible/roles/neovim/files/after/plugin/lsp.lua
Normal file
@@ -0,0 +1,65 @@
local lsp = require("lsp-zero")

lsp.preset("recommended")

lsp.ensure_installed({
    'tsserver',
    'eslint',
    'sumneko_lua',
    'rust_analyzer',
})

local cmp = require('cmp')
local cmp_select = {behavior = cmp.SelectBehavior.Select}
local cmp_mappings = lsp.defaults.cmp_mappings({
    ['<C-p>'] = cmp.mapping.select_prev_item(cmp_select),
    ['<C-n>'] = cmp.mapping.select_next_item(cmp_select),
    ['<C-y>'] = cmp.mapping.confirm({ select = true }),
    ["<C-Space>"] = cmp.mapping.complete(),
})

-- disable completion with tab
-- this helps with copilot setup
cmp_mappings['<Tab>'] = nil
cmp_mappings['<S-Tab>'] = nil

lsp.setup_nvim_cmp({
    mapping = cmp_mappings
})

lsp.set_preferences({
    suggest_lsp_servers = false,
    sign_icons = {
        error = 'E',
        warn = 'W',
        hint = 'H',
        info = 'I'
    }
})

vim.diagnostic.config({
    virtual_text = true,
})

lsp.on_attach(function(client, bufnr)
    local opts = {buffer = bufnr, remap = false}

    if client.name == "eslint" then
        vim.cmd.LspStop('eslint')
        return
    end

    vim.keymap.set("n", "gd", vim.lsp.buf.definition, opts)
    vim.keymap.set("n", "K", vim.lsp.buf.hover, opts)
    vim.keymap.set("n", "<leader>vws", vim.lsp.buf.workspace_symbol, opts)
    vim.keymap.set("n", "<leader>vd", vim.diagnostic.open_float, opts)
    vim.keymap.set("n", "[d", vim.diagnostic.goto_next, opts)
    vim.keymap.set("n", "]d", vim.diagnostic.goto_prev, opts)
    vim.keymap.set("n", "<leader>vca", vim.lsp.buf.code_action, opts)
    vim.keymap.set("n", "<leader>vrr", vim.lsp.buf.references, opts)
    vim.keymap.set("n", "<leader>vrn", vim.lsp.buf.rename, opts)
    vim.keymap.set("i", "<C-h>", vim.lsp.buf.signature_help, opts)
end)

lsp.setup()
6
Ansible/roles/neovim/files/after/plugin/telescope.lua
Normal file
@@ -0,0 +1,6 @@
local builtin = require('telescope.builtin')
vim.keymap.set('n', '<leader>pf', builtin.find_files, {})
vim.keymap.set('n', '<C-p>', builtin.git_files, {})
vim.keymap.set('n', '<leader>ps', function()
    builtin.grep_string({ search = vim.fn.input("Grep > ") })
end)
22
Ansible/roles/neovim/files/after/plugin/treesitter.lua
Normal file
@@ -0,0 +1,22 @@
require'nvim-treesitter.configs'.setup {
    -- A list of parser names, or "all"
    ensure_installed = { "help", "javascript", "typescript", "c", "lua", "go", "rust" },

    -- Install parsers synchronously (only applied to `ensure_installed`)
    sync_install = false,

    -- Automatically install missing parsers when entering buffer
    -- Recommendation: set to false if you don't have `tree-sitter` CLI installed locally
    auto_install = true,

    highlight = {
        -- `false` will disable the whole extension
        enable = true,

        -- Setting this to true will run `:h syntax` and tree-sitter at the same time.
        -- Set this to `true` if you depend on 'syntax' being enabled (like for indentation).
        -- Using this option may slow down your editor, and you may see some duplicate highlights.
        -- Instead of true it can also be a list of languages
        additional_vim_regex_highlighting = false,
    },
}
1
Ansible/roles/neovim/files/after/plugin/undotree.lua
Normal file
@@ -0,0 +1 @@
vim.keymap.set("n", "<leader>u", vim.cmd.UndotreeToggle)
1
Ansible/roles/neovim/files/init.lua
Normal file
@@ -0,0 +1 @@
require("tas")
33
Ansible/roles/neovim/files/lua/tas/init.lua
Normal file
@@ -0,0 +1,33 @@
require("tas.set")
require("tas.remap")

local augroup = vim.api.nvim_create_augroup
local ThePrimeagenGroup = augroup('ThePrimeagen', {})

local autocmd = vim.api.nvim_create_autocmd
local yank_group = augroup('HighlightYank', {})

function R(name)
    require("plenary.reload").reload_module(name)
end

autocmd('TextYankPost', {
    group = yank_group,
    pattern = '*',
    callback = function()
        vim.highlight.on_yank({
            higroup = 'IncSearch',
            timeout = 40,
        })
    end,
})

autocmd({"BufWritePre"}, {
    group = ThePrimeagenGroup,
    pattern = "*",
    command = [[%s/\s\+$//e]],
})

vim.g.netrw_browse_split = 0
vim.g.netrw_banner = 0
vim.g.netrw_winsize = 25
50
Ansible/roles/neovim/files/lua/tas/packer.lua
Normal file
@@ -0,0 +1,50 @@
-- This file can be loaded by calling `lua require('plugins')` from your init.vim

-- Only required if you have packer configured as `opt`
vim.cmd [[packadd packer.nvim]]

return require('packer').startup(function(use)
    -- Packer can manage itself
    use 'wbthomason/packer.nvim'

    use {
        'nvim-telescope/telescope.nvim', tag = '0.1.0',
        -- or , branch = '0.1.x',
        requires = { {'nvim-lua/plenary.nvim'} }
    }

    use({
        'rose-pine/neovim',
        as = 'rose-pine',
        config = function()
            vim.cmd('colorscheme rose-pine')
        end
    })

    use({'nvim-treesitter/nvim-treesitter', run = ':TSUpdate'})
    use('theprimeagen/harpoon')
    use('mbbill/undotree')
    use('tpope/vim-fugitive')

    use {
        'VonHeikemen/lsp-zero.nvim',
        requires = {
            -- LSP Support
            {'neovim/nvim-lspconfig'},
            {'williamboman/mason.nvim'},
            {'williamboman/mason-lspconfig.nvim'},

            -- Autocompletion
            {'hrsh7th/nvim-cmp'},
            {'hrsh7th/cmp-buffer'},
            {'hrsh7th/cmp-path'},
            {'saadparwaiz1/cmp_luasnip'},
            {'hrsh7th/cmp-nvim-lsp'},
            {'hrsh7th/cmp-nvim-lua'},

            -- Snippets
            {'L3MON4D3/LuaSnip'},
            {'rafamadriz/friendly-snippets'},
        }
    }
end)
43
Ansible/roles/neovim/files/lua/tas/remap.lua
Normal file
@@ -0,0 +1,43 @@

vim.g.mapleader = " "
vim.keymap.set("n", "<leader>pv", vim.cmd.Ex)

vim.keymap.set("v", "J", ":m '>+1<CR>gv=gv")
vim.keymap.set("v", "K", ":m '<-2<CR>gv=gv")

vim.keymap.set("n", "J", "mzJ`z")
vim.keymap.set("n", "<C-d>", "<C-d>zz")
vim.keymap.set("n", "<C-u>", "<C-u>zz")
vim.keymap.set("n", "n", "nzzzv")
vim.keymap.set("n", "N", "Nzzzv")

vim.keymap.set("n", "<leader>vwm", function()
    require("vim-with-me").StartVimWithMe()
end)
vim.keymap.set("n", "<leader>svwm", function()
    require("vim-with-me").StopVimWithMe()
end)

-- greatest remap ever
vim.keymap.set("x", "<leader>p", [["_dP]])

-- next greatest remap ever : asbjornHaland
vim.keymap.set({"n", "v"}, "<leader>y", [["+y]])
vim.keymap.set("n", "<leader>Y", [["+Y]])

vim.keymap.set({"n", "v"}, "<leader>d", [["_d]])

-- This is going to get me cancelled
vim.keymap.set("i", "<C-c>", "<Esc>")

vim.keymap.set("n", "Q", "<nop>")
vim.keymap.set("n", "<C-f>", "<cmd>silent !tmux neww tmux-sessionizer<CR>")
vim.keymap.set("n", "<leader>f", vim.lsp.buf.format)

vim.keymap.set("n", "<C-k>", "<cmd>cnext<CR>zz")
vim.keymap.set("n", "<C-j>", "<cmd>cprev<CR>zz")
vim.keymap.set("n", "<leader>k", "<cmd>lnext<CR>zz")
vim.keymap.set("n", "<leader>j", "<cmd>lprev<CR>zz")

vim.keymap.set("n", "<leader>s", [[:%s/\<<C-r><C-w>\>/<C-r><C-w>/gI<Left><Left><Left>]])
vim.keymap.set("n", "<leader>x", "<cmd>!chmod +x %<CR>", { silent = true })
30
Ansible/roles/neovim/files/lua/tas/set.lua
Normal file
@@ -0,0 +1,30 @@
vim.opt.nu = true
vim.opt.relativenumber = true

vim.opt.tabstop = 4
vim.opt.softtabstop = 4
vim.opt.shiftwidth = 4
vim.opt.expandtab = true

vim.opt.smartindent = true

vim.opt.wrap = false

vim.opt.swapfile = false
vim.opt.backup = false
vim.opt.undodir = os.getenv("HOME") .. "/.vim/undodir"
vim.opt.undofile = true

vim.opt.hlsearch = false
vim.opt.incsearch = true

vim.opt.termguicolors = true

vim.opt.scrolloff = 8
vim.opt.signcolumn = "yes"
vim.opt.isfname:append("@-@")

vim.opt.updatetime = 50
vim.opt.colorcolumn = "80"

vim.g.mapleader = " "
148
Ansible/roles/neovim/tasks/main.yml
Normal file
@@ -0,0 +1,148 @@
---
- name: Check if neovim is installed via the package manager
  package_facts:
    manager: auto

- name: Check if neovim is installed
  command:
    cmd: which nvim
  register: nvim
  changed_when: False
  failed_when: False

- name: Check the nvim version if installed
  shell:
    cmd: "nvim --version | head -n 1 | cut -c 6-"
  register: neovim_version
  when: nvim.rc == 0
  changed_when: False

- name: Set current neovim version to 0 if not installed
  set_fact:
    neovim_version:
      stdout: 0
  when: nvim.rc == 1

- name: Install python3 and pip
  package:
    name:
      - python3
      - python3-pip
    state: latest

- name: Install github3 module
  pip:
    name:
      - github3.py

- name: Check latest neovim release
  # community.general.github_release only works for repos with ever-increasing release numbers (!stable, !lts)
  block:
    - name: Get the latest release tag
      github_release:
        user: neovim
        repo: neovim
        action: latest_release
        # token: "{{ github_token }}"
      register: neovim_github_release
      changed_when: neovim_github_release.tag != neovim_version.stdout

    - name: Determine latest neovim release (local)
      when: (neovim_github_release.tag == "stable") or (neovim_github_release.tag == "latest")
      delegate_to: localhost
      become: false
      uri:
        url: "https://api.github.com/repos/neovim/neovim/releases"
        body_format: json
      register: _github_releases
      until: _github_releases.status == 200
      retries: 5

    - name: Set neovim_release
      set_fact:
        neovim_release: "{{ _github_releases.json
          | json_query('[?prerelease==`false` && draft==`false`].tag_name')
          | community.general.version_sort
          | last }}"
        #| regex_replace('^v?(.*)$', '\\1') }}"
      run_once: true

- name: Print neovim version
  debug:
    msg: "neovim_release: {{ neovim_release }} neovim_version.stdout: {{ neovim_version.stdout }}"

- name: Delete the old version of nvim via the package manager
  package:
    name: neovim
    state: absent
  when: "'neovim' in ansible_facts.packages and neovim_release != neovim_version.stdout"

- name: Install neovim
  when: neovim_release != neovim_version.stdout
  block:
    - name: Check if the node repo is present
      stat:
        path: "/etc/apt/sources.list.d/nodesource.list"
      register: nodesource

    - name: Add the node repo
      shell:
        cmd: "curl -sL https://deb.nodesource.com/setup_16.x | bash -"
      when: not nodesource.stat.exists
      tags:
        - skip_ansible_lint

    - name: Install the dependencies
      package:
        name:
          - golang
          - ninja-build
          - gettext
          - libtool
          - libtool-bin
          - autoconf
          - automake
          - cmake
          - g++
          - pkg-config
          - unzip
          - curl
          - doxygen
        state: present

    - name: Install python modules
      pip:
        name:
          - setuptools
          - pynvim

- name: Grab the latest release source
  when: neovim_release != neovim_version.stdout
  unarchive:
    src: "https://github.com/neovim/neovim/archive/{{ neovim_github_release['tag'] }}.tar.gz"
    dest: /tmp
    remote_src: true

- name: Get the neovim folder
  when: neovim_release != neovim_version.stdout
  find:
    paths: /tmp
    patterns: "^neovim.*$"
    use_regex: yes
    file_type: directory
    recurse: no
  register: neovim_source

- name: Compile and install neovim
  when: neovim_release != neovim_version.stdout
  shell:
    cmd: cd {{ neovim_source.files[0].path }} && make install

- name: Clean up
  when: neovim_release != neovim_version.stdout
  file:
    path: "{{ neovim_source.files[0].path }}"
    state: absent

- name: Include plugin tasks
  ansible.builtin.include_tasks: plugins.yml
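The release-selection logic above filters out prereleases and drafts, then version-sorts the remaining tag names and takes the last one. The same idea can be shown with coreutils alone (the tag names here are illustrative, not fetched from the API):

```shell
# Sketch: pick the highest release tag by version sort (GNU sort -V),
# mirroring the version_sort | last filter chain in the playbook.
tags="v0.8.3
v0.9.0
v0.9.5
v0.10.0"

# Plain lexicographic sort would put v0.9.5 after v0.10.0;
# sort -V compares numeric components correctly.
latest=$(printf '%s\n' "$tags" | sort -V | tail -n 1)
echo "$latest"
```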
25
Ansible/roles/neovim/tasks/plugins.yml
Normal file
@@ -0,0 +1,25 @@
---
- name: Check if packer (package management) is installed
  stat:
    path: "~{{ ansible_user }}/.local/share/nvim/site/pack/packer/start/packer.nvim"
  register: packer

- name: Install packer (package management)
  when: not packer.stat.exists
  become: false
  shell:
    cmd: "git clone --depth 1 https://github.com/wbthomason/packer.nvim \
      ~{{ ansible_user }}/.local/share/nvim/site/pack/packer/start/packer.nvim"

- name: Copy lua folder (config scripts)
  copy:
    src: ../files/
    dest: ~{{ ansible_user }}/.config/nvim/.
    owner: "{{ ansible_user }}"
    mode: "u=rw,g=,o="
  register: files

#- name: Install the packages
#  when: files.changed
#  shell:
#    cmd: "nvim --headless -c 'autocmd User PackerComplete quitall' -c 'PackerSync'"
29
Ansible/roles/smart_monitoring/defaults/main.yml
Normal file
@@ -0,0 +1,29 @@
---
# SMART monitoring configuration

# Drives to monitor continuously (24/7)
smart_active_drives:
  - device: /dev/sdb
    description: "Main Storage - bcache backing device"
    type: hdd
    temp_warn: 45
    temp_crit: 50
  - device: /dev/nvme0n1
    description: "System + Cache Drive"
    type: nvme
    temp_warn: 60
    temp_crit: 65

# Backup drive (monitored during backups only, not 24/7)
smart_backup_drive:
  device: /dev/sda
  description: "Backup Mirror Drive - DISABLED from continuous monitoring"
  reason: "Spins down after 10 minutes to save energy, checked during weekly backups"

# Test schedule
# S/../.././HH = Short test daily at HH:00
# L/../../D/HH = Long test weekly on day D at HH:00
smart_test_schedule:
  short: "02"      # 2 AM daily
  long_day: "6"    # Saturday (0=Sunday, 6=Saturday)
  long_hour: "03"  # 3 AM
5
Ansible/roles/smart_monitoring/handlers/main.yml
Normal file
@@ -0,0 +1,5 @@
---
- name: Restart smartmontools
  ansible.builtin.systemd:
    name: smartmontools
    state: restarted
28
Ansible/roles/smart_monitoring/tasks/main.yml
Normal file
@@ -0,0 +1,28 @@
---
- name: Install smartmontools
  ansible.builtin.apt:
    name: smartmontools
    state: present
    update_cache: yes

- name: Backup existing smartd.conf
  ansible.builtin.copy:
    src: /etc/smartd.conf
    dest: /etc/smartd.conf.bak
    remote_src: yes
    force: no

- name: Configure smartd monitoring
  ansible.builtin.template:
    src: smartd.conf.j2
    dest: /etc/smartd.conf
    owner: root
    group: root
    mode: '0644'
  notify: Restart smartmontools

- name: Enable and start smartmontools service
  ansible.builtin.systemd:
    name: smartmontools
    enabled: yes
    state: started
20
Ansible/roles/smart_monitoring/templates/smartd.conf.j2
Normal file
@@ -0,0 +1,20 @@
# smartd configuration for home server
# Generated by Ansible - do not edit manually
# Monitor specific drives with custom settings

# {{ smart_backup_drive.description }}
# {{ smart_backup_drive.reason }}
# Manual check: sudo smartctl -a {{ smart_backup_drive.device }}
# {{ smart_backup_drive.device }} -a -o on -S on -s (S/../.././{{ smart_test_schedule.short }}|L/../../{{ smart_test_schedule.long_day }}/{{ smart_test_schedule.long_hour }}) -W 5,45,50 -m root -M exec /usr/share/smartmontools/smartd-runner

{% for drive in smart_active_drives %}
# {{ drive.description }}
# Active 24/7, monitor continuously with daily tests
{% if drive.type == 'nvme' %}
# NVMe runs hotter, warn at {{ drive.temp_warn }}°C, critical at {{ drive.temp_crit }}°C
{% else %}
# Warn at {{ drive.temp_warn }}°C, critical at {{ drive.temp_crit }}°C
{% endif %}
{{ drive.device }} -a -o on -S on -s (S/../.././{{ smart_test_schedule.short }}|L/../../{{ smart_test_schedule.long_day }}/{{ smart_test_schedule.long_hour }}) -W 5,{{ drive.temp_warn }},{{ drive.temp_crit }} -m root -M exec /usr/share/smartmontools/smartd-runner

{% endfor %}
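Rendered with the defaults above, each active drive produces one `smartd` directive line. Assembling one by hand shows the shape the template emits (values taken from the role's defaults; this is an illustration of the rendered output, not a Jinja invocation):

```shell
# Sketch: build the smartd directive the template renders for /dev/sdb,
# using the values from smart_monitoring/defaults/main.yml.
short="02"; long_day="6"; long_hour="03"
device="/dev/sdb"; temp_warn=45; temp_crit=50

# -s (S/../.././HH|L/../../D/HH): short test daily, long test weekly;
# -W 5,warn,crit: report temperature changes and thresholds.
directive="$device -a -o on -S on -s (S/../.././$short|L/../../$long_day/$long_hour) -W 5,$temp_warn,$temp_crit"
echo "$directive"
```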
15
Ansible/roles/system/defaults/main.yml
Normal file
@@ -0,0 +1,15 @@
---
username: tas
password: EckeKlarGleich1!
extra_packages:
  - htop
  - zsh
  - kubectl
  - podman
auth_keys:
  - "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDpvDh3VvtAuaPx+5j4Xfc1kAkB8jV4qyyh2dEyeWfkrzNt6AB3i0snKX17XD4M26o4GKzEcv9HJWAT4o60V6w+B3nKbxWsTZLaldDmXU5zvVlrnwvyuyVbgUjWvsYafBipjJxLUPSouOPZV7HH39cw2VXWASLbe+2ULo10YQWT6kCoYx+hUvll78dP02uPhf7vplFf48WosEGRPl8RhzI5HFxRazcnt7UtO5pM0fAi/vmpy9Sqi3Hdta13y+o9ZYnDV/VGGHkyP5ArLxRKKLAUZcFoUS16R9JnQX49waGkd+WNQxNmy77m1imINy8/zXk8dfjoMFNnBcgToI+eeg3p thoma@TAS-PC"
  - "ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBO3ybRI7itUDyTRUpdwVS1tZlsUGqtqwdaoLBZr4D6h72HEM0/aAUiJMAzgvyPVfqP81Z4VTwKOlnIl4/tPn+qg= thomas@pop"
  - "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIH2lJQtIGjg5U/zWRIRmbbfRDsQ3nvlHzHvG4cFeF77c phone-auth"

shell: /usr/bin/zsh
dotfiles_repo: 'https://github.com/ThePrimeagen/.dotfiles.git'
24
Ansible/roles/system/tasks/dotfiles.yml
Normal file
@@ -0,0 +1,24 @@
---
- name: Chown the repo
  file:
    path: '/home/{{ username }}/dotfiles'
    recurse: yes
    state: directory
    owner: '{{ username }}'
    group: '{{ username }}'

- name: Clone the latest dotfiles repo
  become_user: '{{ username }}'
  git:
    repo: '{{ dotfiles_repo }}'
    dest: '/home/{{ username }}/dotfiles'
    recursive: no
    force: yes

- name: Stow the dotfiles
  become_user: '{{ username }}'
  shell:
    cmd: stow -v */
    chdir: '/home/{{ username }}/dotfiles'
  register: stow_result
  changed_when: stow_result.stdout != ""
55
Ansible/roles/system/tasks/essential.yml
Normal file
@@ -0,0 +1,55 @@
---
- name: Update and upgrade packages
  apt:
    update_cache: yes
    upgrade: yes
    autoremove: yes

# Note: Kubernetes APT repository has been deprecated
# K3s provides its own kubectl via symlink at /usr/local/bin/kubectl
# No need to install kubectl separately

- name: Check if reboot required
  stat:
    path: /var/run/reboot-required
  register: reboot_required_file

- name: Reboot if required
  reboot:
    msg: Rebooting due to a kernel update
  when: reboot_required_file.stat.exists

- name: Install extra packages
  package:
    name: "{{ extra_packages }}"
    state: present

- name: Set the hostname
  hostname:
    name: "{{ inventory_hostname }}"

- name: Replace the hostname entry with our own
  ansible.builtin.lineinfile:
    path: /etc/hosts
    insertafter: ^127\.0\.0\.1 *localhost
    line: "127.0.1.1 {{ inventory_hostname }}"
    owner: root
    group: root
    mode: '0644'

- name: Disable cron e-mail notifications
  cron:
    name: MAILTO
    user: root
    env: yes
    job: ""

- name: Set UDP buffer sizes for cloudflared/QUIC performance
  ansible.posix.sysctl:
    name: "{{ item.name }}"
    value: "{{ item.value }}"
    state: present
    reload: yes
  loop:
    - { name: 'net.core.rmem_max', value: '8000000' }
    - { name: 'net.core.wmem_max', value: '8000000' }
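After the sysctl task runs, the new limits can be confirmed with a read-only check straight from `/proc` (8000000 is the value configured above; on an unconfigured host you will see the kernel default instead):

```shell
# Read the effective UDP buffer limits managed by the sysctl task.
# On a host where the playbook has run, both should print 8000000.
for key in rmem_max wmem_max; do
    printf 'net.core.%s = %s\n' "$key" "$(cat /proc/sys/net/core/$key)"
done
```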
9
Ansible/roles/system/tasks/main.yml
Normal file
@@ -0,0 +1,9 @@
---
- name: Include essential tasks
  ansible.builtin.include_tasks: essential.yml

- name: Include user tasks
  ansible.builtin.include_tasks: user.yml

# - name: Include dotfiles tasks
#   ansible.builtin.include_tasks: dotfiles.yml
59
Ansible/roles/system/tasks/user.yml
Normal file
@@ -0,0 +1,59 @@
---
- name: Set the name of a sudo group
  set_fact:
    sudo_group: sudo

- name: Ensure the necessary groups exist
  group:
    name: "{{ item }}"
    state: present
  loop:
    - "{{ username }}"
    # - docker

- name: Create a login user
  user:
    name: "{{ username }}"
    password: "{{ password | password_hash('sha512') }}"
    groups:
      - "{{ sudo_group }}"
      # - docker
      - users
    state: present
    append: true

- name: Chmod the user home directory
  file:
    path: "/home/{{ username }}"
    state: directory
    mode: '0755'
    owner: "{{ username }}"
    group: "{{ username }}"
    recurse: yes

- name: Allow '{{ sudo_group }}' group to have passwordless sudo
  lineinfile:
    path: /etc/sudoers
    state: present
    regexp: '^%{{ sudo_group }}'
    line: '%{{ sudo_group }} ALL=(ALL) NOPASSWD: ALL'
    validate: '/usr/sbin/visudo -cf %s'

- name: Copy the public SSH keys
  authorized_key:
    user: "{{ username }}"
    state: present
    key: "{{ item }}"
  with_items: "{{ auth_keys }}"

- name: Set the default shell
  user:
    name: "{{ username }}"
    shell: "{{ shell }}"

- name: Disable cron e-mail notifications
  cron:
    name: MAILTO
    user: "{{ username }}"
    env: yes
    job: ""
1
Ansible/roles/ubuntu_autoinstall/.gitignore
vendored
Normal file
@@ -0,0 +1 @@
.DS_Store
28
Ansible/roles/ubuntu_autoinstall/README.md
Normal file
@@ -0,0 +1,28 @@
# Ansible Role: Ubuntu Autoinstall

### This role will:
* Download and verify (GPG and SHA256) the newest Ubuntu Server 22.04 ISO
* Unpack the ISO and integrate the user-data file for semi-automated installation
* Repack the ISO and (optionally) upload it to [PiKVM](https://pikvm.org/) for further installation

### Special thanks to:
* covertsh for [Ubuntu Autoinstall Generator](https://github.com/covertsh/ubuntu-autoinstall-generator) – this repo is pretty much an Ansible version of their script

### Example playbook:
```
---
- hosts: all
  gather_facts: yes
  become: no

  roles:
    - role: ubuntu_autoinstall
```

### Variables
* **boot_drive_serial** – the serial number of the drive where you want to install Ubuntu. You can find it using `ls /dev/disk/by-id`. Make sure to omit the interface prefix (e.g. **ata-** or **scsi-**).
* **iso_arch** – architecture of the output ISO file. `amd64` and `arm64` are supported.

Other variables are more or less self-explanatory and can be found in defaults/main.yml
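For a one-off run against the local machine (the role only builds an ISO, so no remote host is needed), an invocation along these lines should work — the playbook filename and extra-vars below are illustrative, not part of the role:

```shell
# Hypothetical local invocation of an example playbook using this role
# (autoinstall.yml is a placeholder name; adjust vars to your hardware).
if command -v ansible-playbook >/dev/null 2>&1 && [ -f autoinstall.yml ]; then
  ansible-playbook -i localhost, -c local autoinstall.yml \
    -e iso_arch=amd64 \
    -e enable_pikvm=false
else
  echo "ansible-playbook or autoinstall.yml not present; nothing to run"
fi
```

The trailing comma in `-i localhost,` tells Ansible the inventory is an inline host list rather than a file.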
39
Ansible/roles/ubuntu_autoinstall/defaults/main.yml
Normal file
@@ -0,0 +1,39 @@
---
locale: "en_US.UTF-8"

ubuntu_release_name: jammy

iso_arch: "amd64"
# Valid values are: amd64, arm64
# ppc64el and s390x haven't been tested yet

keyboard_layout: "us"

hostname: "kimchi"

password: "ubuntu"

username: "tas"

ssh_public_key: "ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBO3ybRI7itUDyTRUpdwVS1tZlsUGqtqwdaoLBZr4D6h72HEM0/aAUiJMAzgvyPVfqP81Z4VTwKOlnIl4/tPn+qg= thomas@pop"

ssh_public_key_2: "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDpvDh3VvtAuaPx+5j4Xfc1kAkB8jV4qyyh2dEyeWfkrzNt6AB3i0snKX17XD4M26o4GKzEcv9HJWAT4o60V6w+B3nKbxWsTZLaldDmXU5zvVlrnwvyuyVbgUjWvsYafBipjJxLUPSouOPZV7HH39cw2VXWASLbe+2ULo10YQWT6kCoYx+hUvll78dP02uPhf7vplFf48WosEGRPl8RhzI5HFxRazcnt7UtO5pM0fAi/vmpy9Sqi3Hdta13y+o9ZYnDV/VGGHkyP5ArLxRKKLAUZcFoUS16R9JnQX49waGkd+WNQxNmy77m1imINy8/zXk8dfjoMFNnBcgToI+eeg3p thoma@TAS-PC"

pikvm_address: "pikvm.box"

pikvm_username: "admin"

pikvm_password: "admin"

target_dir: "{{ ansible_env.HOME }}/.local/ansible_ubuntu-autoinstall"

ubuntu_gpg_key: 843938DF228D22F7B3742BC0D94AA3F0EFE21092

enable_pikvm: false

enable_hwe_kernel: false

enable_swap_file: false

# boot_drive_serial: "Crucial_CT256MX100SSD1_14370D31E955"
boot_drive_serial: "KINGSTON_SNV2S250G_50026B7784E505B4"
14
Ansible/roles/ubuntu_autoinstall/meta/main.yml
Normal file
@@ -0,0 +1,14 @@
---
galaxy_info:
  role_name: ubuntu_autoinstall
  author: festlandtommy
  description: Generates an Ubuntu 22.04 Server ISO with a user-data template and optionally deploys it to PiKVM
  license: WTFPL
  min_ansible_version: 2.4
  platforms:
    - name: Ubuntu
      versions:
        - jammy
  galaxy_tags:
    - system
dependencies: []
24
Ansible/roles/ubuntu_autoinstall/tasks/configure.yml
Normal file
@@ -0,0 +1,24 @@
---
- name: Install dependencies
  become: yes
  package:
    name:
      - xorriso
      - gpg
      - curl
    state: present
  when: ansible_os_family != "Darwin"

- name: Install dependencies (macOS)
  package:
    name:
      - xorriso
      - gnupg
      - curl
    state: present
  when: ansible_os_family == "Darwin"

- name: Create the temporary directory
  file:
    path: "{{ target_dir }}"
    state: directory
52
Ansible/roles/ubuntu_autoinstall/tasks/download_verify.yml
Normal file
@@ -0,0 +1,52 @@
---
- name: Download the SHA256 sums
  get_url:
    url: "https://cdimage.ubuntu.com/ubuntu-server/{{ ubuntu_release_name }}/daily-live/current/SHA256SUMS"
    dest: "{{ target_dir }}"

- name: Get the SHA256 sum for the {{ iso_arch }} ISO
  shell:
    cmd: "grep {{ iso_arch }}.iso {{ target_dir }}/SHA256SUMS | cut -d ' ' -f1"
  changed_when: false
  register: sha256sum

- name: Download the latest Ubuntu Server 22.04 {{ iso_arch }} ISO
  get_url:
    url: "https://cdimage.ubuntu.com/ubuntu-server/{{ ubuntu_release_name }}/daily-live/current/{{ ubuntu_release_name }}-live-server-{{ iso_arch }}.iso"
    dest: "{{ target_dir }}/{{ ubuntu_release_name }}-live-server-{{ iso_arch }}.iso"
    checksum: "sha256:{{ sha256sum.stdout }}"

- name: Download the GPG keys
  get_url:
    url: "https://cdimage.ubuntu.com/ubuntu-server/{{ ubuntu_release_name }}/daily-live/current/SHA256SUMS.gpg"
    dest: "{{ target_dir }}"

- name: Check if dirmngr is running
  shell:
    cmd: pgrep dirmngr
  failed_when: false
  changed_when: false
  register: dirmngr_status

- name: Launch dirmngr if it isn't running
  shell:
    cmd: "dirmngr --daemon"
  when: dirmngr_status.rc != 0

- name: Import the GPG key
  shell:
    cmd: "gpg -q --no-default-keyring --keyring '{{ target_dir }}/{{ ubuntu_gpg_key }}.keyring' --keyserver 'hkp://keyserver.ubuntu.com' --recv-keys {{ ubuntu_gpg_key }}"
    creates: "{{ target_dir }}/{{ ubuntu_gpg_key }}.keyring"
  ignore_errors: yes

- name: Verify the GPG key
  shell:
    cmd: "gpg -q --keyring '{{ target_dir }}/{{ ubuntu_gpg_key }}.keyring' --verify '{{ target_dir }}/SHA256SUMS.gpg' '{{ target_dir }}/SHA256SUMS' 2>/dev/null"
  ignore_errors: yes

- name: Kill dirmngr if we launched it
  shell:
    cmd: "pkill dirmngr"
  when: dirmngr_status.rc != 0
8
Ansible/roles/ubuntu_autoinstall/tasks/extract_efi_image.sh
Executable file
@@ -0,0 +1,8 @@
#!/bin/bash

SOURCE_ISO="$1"
TMP_DIR="$2"

START_BLOCK=$(fdisk -l "${SOURCE_ISO}" | fgrep '.iso2 ' | awk '{print $2}')
SECTORS=$(fdisk -l "${SOURCE_ISO}" | fgrep '.iso2 ' | awk '{print $4}')
dd if="${SOURCE_ISO}" bs=512 skip="${START_BLOCK}" count="${SECTORS}" of="${TMP_DIR}/iso.efi"
123
Ansible/roles/ubuntu_autoinstall/tasks/generate_iso.yml
Normal file
@@ -0,0 +1,123 @@
---
- name: Create the extraction directory
  file:
    path: "{{ target_dir }}/iso"
    state: directory

- name: Extract the ISO
  shell:
    cmd: "xorriso -osirrox on -indev {{ target_dir }}/{{ ubuntu_release_name }}-live-server-{{ iso_arch }}.iso -extract / {{ target_dir }}/iso"

- name: Extract MBR image
  shell:
    cmd: "dd if={{ target_dir }}/{{ ubuntu_release_name }}-live-server-{{ iso_arch }}.iso bs=1 count=446 of={{ target_dir }}/iso.mbr"

- name: Extract EFI image
  script: "./extract_efi_image.sh {{ target_dir }}/{{ ubuntu_release_name }}-live-server-{{ iso_arch }}.iso {{ target_dir }}"

- name: Fix extracted ISO mode
  file:
    path: "{{ target_dir }}/iso"
    mode: "u+w"
    recurse: yes
    follow: no

- name: Delete the [BOOT] folder
  file:
    path: "{{ target_dir }}/iso/[BOOT]"
    state: absent

- name: Enable HWE kernel in GRUB bootloader
  replace:
    path: "{{ item }}"
    regexp: '/casper/(vmlinuz|initrd)'
    replace: '/casper/hwe-\1'
  with_items:
    - "{{ target_dir }}/iso/boot/grub/grub.cfg"
    - "{{ target_dir }}/iso/boot/grub/loopback.cfg"
  when: enable_hwe_kernel | default(False)

- name: Add the autoinstall parameter to the GRUB bootloader
  replace:
    path: "{{ item }}"
    regexp: "---$"
    replace: " autoinstall ds=nocloud\\;s=/cdrom/nocloud/ ---"
  with_items:
    - "{{ target_dir }}/iso/boot/grub/grub.cfg"
    - "{{ target_dir }}/iso/boot/grub/loopback.cfg"

- name: Create the nocloud directory
  file:
    path: "{{ target_dir }}/iso/nocloud"
    state: directory

- name: Generate and install the user-data file
  template:
    src: user-data.j2
    dest: "{{ target_dir }}/iso/nocloud/user-data"

- name: Create an empty meta-data file
  file:
    path: "{{ target_dir }}/iso/nocloud/meta-data"
    state: touch
    modification_time: preserve
    access_time: preserve

- name: Calculate the new MD5 hashes
  stat:
    path: "{{ item }}"
    checksum_algorithm: md5
  with_items:
    - "{{ target_dir }}/iso/boot/grub/grub.cfg"
    - "{{ target_dir }}/iso/boot/grub/loopback.cfg"
  register: md5sums

- name: Write the new MD5 hash (grub.cfg)
  lineinfile:
    line: "{{ md5sums.results[0].stat.checksum }} ./boot/grub/grub.cfg"
    search_string: /boot/grub/grub.cfg
    path: "{{ target_dir }}/iso/md5sum.txt"

- name: Write the new MD5 hash (loopback.cfg)
  lineinfile:
    line: "{{ md5sums.results[1].stat.checksum }} ./boot/grub/loopback.cfg"
    search_string: loopback.cfg
    path: "{{ target_dir }}/iso/md5sum.txt"

- name: Repack the ISO (amd64)
  shell:
    cmd: "cd {{ target_dir }}/iso && \
          xorriso -as mkisofs -quiet -D -r -V ubuntu-autoinstall_{{ iso_arch }} -cache-inodes -J -l \
          -iso-level 3 \
          -partition_offset 16 \
          --grub2-mbr {{ target_dir }}/iso.mbr \
          --mbr-force-bootable \
          -append_partition 2 0xEF {{ target_dir }}/iso.efi \
          -appended_part_as_gpt \
          -c boot.catalog \
          -b boot/grub/i386-pc/eltorito.img \
          -no-emul-boot -boot-load-size 4 -boot-info-table --grub2-boot-info \
          -eltorito-alt-boot \
          -e '--interval:appended_partition_2:all::' \
          -no-emul-boot \
          -o {{ target_dir }}/ubuntu_autoinstall_{{ iso_arch }}.iso \
          ."
  when: iso_arch == 'amd64'

- name: Repack the ISO (arm64)
  shell:
    cmd: "cd {{ target_dir }}/iso && xorriso -as mkisofs -quiet -D -r -V ubuntu-autoinstall_{{ iso_arch }} -cache-inodes -J -joliet-long -no-emul-boot -e boot/grub/efi.img -partition_cyl_align all -append_partition 2 0xef boot/grub/efi.img -no-emul-boot -o {{ target_dir }}/ubuntu_autoinstall_{{ iso_arch }}.iso ."
  when: iso_arch == 'arm64'

- name: Clean up
  file:
    path: "{{ item }}"
    state: absent
  with_items:
    - "{{ target_dir }}/iso"
    - "{{ target_dir }}/iso.mbr"
    - "{{ target_dir }}/iso.efi"

- name: Done!
  debug:
    msg: "Done! The ISO file has been generated: {{ target_dir }}/ubuntu_autoinstall_{{ iso_arch }}.iso"
13
Ansible/roles/ubuntu_autoinstall/tasks/main.yml
Normal file
@@ -0,0 +1,13 @@
---
- name: Configure the target system and install dependencies
  include_tasks: configure.yml

- name: Download and verify the ISO
  include_tasks: download_verify.yml

- name: Generate the ISO
  include_tasks: generate_iso.yml

- name: Upload the ISO to the KVM
  include_tasks: upload_kvm.yml
  when: enable_pikvm | default(False)
49
Ansible/roles/ubuntu_autoinstall/tasks/upload_kvm.yml
Normal file
@@ -0,0 +1,49 @@
---
- name: Get the file size of the ISO
  stat:
    path: "{{ target_dir }}/ubuntu_autoinstall_{{ iso_arch }}.iso"
  register: iso

- name: Disconnect the current drive
  uri:
    url: "http://{{ pikvm_address }}/api/msd/set_connected?connected=0"
    method: POST
    status_code: [ 400, 200 ]
    headers:
      X-KVMD-User: "{{ pikvm_username }}"
      X-KVMD-Passwd: "{{ pikvm_password }}"
  register: response
  changed_when: response.json is not search("MsdDisconnectedError")

- name: Remove the previous ISO
  uri:
    url: "http://{{ pikvm_address }}/api/msd/remove?image=ubuntu_autoinstall.iso"
    status_code: [ 400, 200 ]
    method: POST
    headers:
      X-KVMD-User: "{{ pikvm_username }}"
      X-KVMD-Passwd: "{{ pikvm_password }}"
  register: response
  changed_when: response.json is not search("MsdUnknownImageError")

- name: Upload the ISO to PiKVM
  shell:
    cmd: "curl --location --request POST '{{ pikvm_address }}/api/msd/write' --header 'X-KVMD-User: {{ pikvm_username }}' --header 'X-KVMD-Passwd: {{ pikvm_password }}' --form 'image=ubuntu_autoinstall.iso' --form 'size={{ iso.stat.size | int }}' --form 'data=@{{ target_dir }}/ubuntu_autoinstall_{{ iso_arch }}.iso'"

- name: Select the ubuntu_autoinstall ISO
  uri:
    validate_certs: no
    url: "http://{{ pikvm_address }}/api/msd/set_params?image=ubuntu_autoinstall.iso"
    method: POST
    headers:
      X-KVMD-User: "{{ pikvm_username }}"
      X-KVMD-Passwd: "{{ pikvm_password }}"

- name: Connect the ISO to the server
  uri:
    validate_certs: no
    url: "http://{{ pikvm_address }}/api/msd/set_connected?connected=true"
    method: POST
    headers:
      X-KVMD-User: "{{ pikvm_username }}"
      X-KVMD-Passwd: "{{ pikvm_password }}"
39
Ansible/roles/ubuntu_autoinstall/templates/user-data.j2
Normal file
@@ -0,0 +1,39 @@
#cloud-config
autoinstall:
  version: 1
  locale: {{ locale }}
  keyboard:
    layout: {{ keyboard_layout }}
  refresh-installer:
    update: yes
  identity:
    hostname: {{ hostname }}
    password: {{ password | password_hash('sha512') }}
    username: {{ username }}
  ssh:
    install-server: true
    allow-pw: false
    authorized-keys:
      - {{ ssh_public_key }}
      - {{ ssh_public_key_2 }}
  storage:
    grub:
      reorder_uefi: False
{% if not enable_swap_file %}
    swap:
      size: 0
{% endif %}
    config:
      - {ptable: gpt, serial: "{{ boot_drive_serial }}", preserve: false, name: '', grub_device: false, type: disk, id: bootdrive}

      - {device: bootdrive, size: 536870912, wipe: superblock, flag: boot, number: 1, preserve: false, grub_device: true, type: partition, id: efipart}
      - {fstype: fat32, volume: efipart, preserve: false, type: format, id: efi}

      - {device: bootdrive, size: 75000000000, wipe: superblock, flag: linux, number: 2, preserve: false, grub_device: false, type: partition, id: rootpart}
      - {fstype: ext4, volume: rootpart, preserve: false, type: format, id: root}

      - {device: bootdrive, size: -1, wipe: superblock, flag: linux, number: 3, preserve: false, grub_device: false, type: partition, id: cachepart}
      - {fstype: ext4, volume: cachepart, preserve: false, type: format, id: cache}

      - {device: root, path: /, type: mount, id: rootmount}
      - {device: efi, path: /boot/efi, type: mount, id: efimount}
42
Ansible/setup_home_server.yml
Normal file
@@ -0,0 +1,42 @@
---
- hosts: home
  gather_facts: no

  pre_tasks:
    - import_tasks: tasks/ssh_juggle_port.yml
      tags:
        - port

# home server/nas/lab
- hosts: kimchi
  become: yes

  roles:
    - role: system
      tags:
        - system

    - role: neovim
      tags:
        - neovim

    # - role: geerlingguy.security
    #   tags:
    #     - security

    # - role: geerlingguy.docker
    #   tags:
    #     - docker

    # - role: chriswayg.msmtp-mailer
    #   tags:
    #     - msmtp

    - role: containers
      tags:
        - containers

    # - role: tailscale
    #   when: tailscale_enabled | default(false)
    #   tags:
    #     - tailscale
Ansible/setup_maintenance.yml
Normal file
67
Ansible/setup_maintenance.yml
Normal file
@@ -0,0 +1,67 @@
|
||||
---
|
||||
# Maintenance Setup Playbook
|
||||
# Sets up automated maintenance, backups, and monitoring
|
||||
#
|
||||
# Usage:
|
||||
# ansible-playbook -i inventory.ini setup_maintenance.yml
|
||||
# ansible-playbook -i inventory.ini setup_maintenance.yml --tags backup
|
||||
# ansible-playbook -i inventory.ini setup_maintenance.yml --check --diff
|
||||
|
||||
- name: Configure Home Server Maintenance Automation
|
||||
hosts: kimchi
|
||||
become: yes
|
||||
|
||||
roles:
|
||||
- role: maintenance
|
||||
tags:
|
||||
- maintenance
|
||||
- system
|
||||
|
||||
- role: backup
|
||||
tags:
|
||||
- backup
|
||||
- storage
|
||||
|
||||
- role: smart_monitoring
|
||||
tags:
|
||||
- smart
|
||||
- monitoring
|
||||
|
||||
post_tasks:
|
||||
- name: Check bcache status
|
||||
ansible.builtin.include_tasks: tasks/bcache_check.yml
|
||||
tags:
|
||||
- bcache
|
||||
- storage
|
||||
|
||||
- name: Display maintenance timer status
|
||||
ansible.builtin.command: systemctl list-timers k3s-maintenance.timer backup-mirror-sync.timer
|
||||
register: timer_status
|
||||
changed_when: false
|
||||
tags:
|
||||
- status
|
||||
|
||||
- name: Show timer status
|
||||
ansible.builtin.debug:
|
||||
msg: "{{ timer_status.stdout_lines }}"
|
||||
tags:
|
||||
- status
|
||||
|
||||
- name: Display SMART monitoring status
|
||||
ansible.builtin.command: systemctl status smartmontools --no-pager
|
||||
register: smart_status
|
||||
changed_when: false
|
||||
failed_when: false
|
||||
tags:
|
||||
- status
|
||||
|
||||
- name: Show SMART status
|
||||
ansible.builtin.debug:
|
||||
msg: "{{ smart_status.stdout_lines }}"
|
||||
tags:
|
||||
- status
|
||||
|
||||
handlers:
|
||||
- name: Verify all services
|
||||
ansible.builtin.debug:
|
||||
msg: "All maintenance automation configured successfully!"
|
||||
62
Ansible/tasks/bcache_check.yml
Normal file
@@ -0,0 +1,62 @@
---
# Bcache health check and re-attachment tasks
# Can be run standalone or included in maintenance playbooks

- name: Check bcache device exists
  ansible.builtin.stat:
    path: /dev/bcache0
  register: bcache_device

- name: Get bcache cache state
  ansible.builtin.shell: cat /sys/block/bcache0/bcache/state
  register: bcache_state
  when: bcache_device.stat.exists
  changed_when: false

- name: Display bcache state
  ansible.builtin.debug:
    msg: "Bcache state: {{ bcache_state.stdout }}"
  when: bcache_device.stat.exists

- name: Check if bcache cache is detached
  ansible.builtin.set_fact:
    bcache_detached: "{{ bcache_state.stdout == 'no cache' }}"
  when: bcache_device.stat.exists

- name: Re-attach bcache cache if detached
  ansible.builtin.shell: echo "74a7d177-65f4-4902-9fe5-e596602c28d4" > /sys/block/bcache0/bcache/attach
  when:
    - bcache_device.stat.exists
    - bcache_detached | default(false)
  register: bcache_attach

- name: Verify bcache cache attached
  ansible.builtin.shell: cat /sys/block/bcache0/bcache/state
  register: bcache_state_after
  when: bcache_attach is changed
  changed_when: false

- name: Display bcache state after re-attachment
  ansible.builtin.debug:
    msg: "Bcache state after re-attachment: {{ bcache_state_after.stdout }}"
  when: bcache_attach is changed

- name: Get bcache statistics
  ansible.builtin.shell: |
    hits=$(cat /sys/block/bcache0/bcache/stats_total/cache_hits)
    misses=$(cat /sys/block/bcache0/bcache/stats_total/cache_misses)
    total=$((hits + misses))
    if [ $total -gt 0 ]; then
      ratio=$(echo "scale=2; $hits / $total * 100" | bc)
      echo "Hits: $hits, Misses: $misses, Hit Ratio: ${ratio}%"
    else
      echo "No cache statistics yet"
    fi
  register: bcache_stats
  when: bcache_device.stat.exists
  changed_when: false

- name: Display bcache statistics
  ansible.builtin.debug:
    msg: "{{ bcache_stats.stdout }}"
  when: bcache_device.stat.exists
70
Ansible/tasks/ssh_juggle_port.yml
Normal file
@@ -0,0 +1,70 @@
---
- name: SSH Port Juggle | Try connecting via SSH
  wait_for_connection:
    timeout: 5
  ignore_errors: true
  register: _ssh_port_result

- name: SSH Port Juggle | Set the ansible_port to the fallback default port
  set_fact:
    ansible_ssh_port: "22"
  when:
    - _ssh_port_result is failed

- name: SSH Port Juggle | Try connecting again
  wait_for_connection:
    timeout: 5
  ignore_errors: true
  register: _ssh_port_default_result
  when:
    - _ssh_port_result is failed

- name: SSH Port Juggle | Set the ansible_port to the fallback default port and credentials
  set_fact:
    ansible_ssh_port: "22"
    ansible_ssh_user: "pi"
    ansible_ssh_password: "raspberry"
  when:
    - _ssh_port_result is failed
    - _ssh_port_default_result is failed

- name: Try default credentials (for Raspberry Pi)
  wait_for_connection:
    timeout: 5
  ignore_errors: true
  register: _ssh_port_default_cred_result
  when:
    - _ssh_port_result is failed
    - _ssh_port_default_result is failed

- name: SSH Port Juggle | Try root
  set_fact:
    ansible_ssh_port: "22"
    ansible_ssh_user: "root"
  when:
    - _ssh_port_result is failed
    - _ssh_port_default_result is failed
    - _ssh_port_default_cred_result is failed

- name: Try root
  wait_for_connection:
    timeout: 5
  ignore_errors: true
  register: _ssh_port_root_result
  when:
    - _ssh_port_result is failed
    - _ssh_port_default_result is failed
    - _ssh_port_default_cred_result is failed

- name: SSH Port Juggle | Fail
  fail: msg="Neither the configured ansible_port {{ ansible_port }} nor the fallback port 22 were reachable"
  when:
    - _ssh_port_result is failed
    - _ssh_port_default_result is defined
    - _ssh_port_default_result is failed
    - _ssh_port_default_cred_result is defined
    - _ssh_port_default_cred_result is failed
    - _ssh_port_root_result is defined
    - _ssh_port_root_result is failed
24
Ansible/tasks/update_k3s.yaml
Normal file
@@ -0,0 +1,24 @@
---
- name: Check if k3s binary exists
  stat: path=/usr/local/bin/k3s
  register: k3s_stat

- name: Backup the old k3s binary
  command: mv /usr/local/bin/k3s /usr/local/bin/k3s_before_upgrade
  when: k3s_stat.stat.exists

- name: Download the new k3s binary
  get_url:
    url: https://github.com/k3s-io/k3s/releases/download/{{ k3s_version }}/k3s
    checksum: sha256:https://github.com/k3s-io/k3s/releases/download/{{ k3s_version }}/sha256sum-amd64.txt
    dest: /usr/local/bin/k3s
    owner: root
    group: root
    mode: 0755

- name: Restart and check K3s service
  systemd:
    name: k3s
    daemon_reload: yes
    state: restarted
    enabled: yes
8
Ansible/update_k3s.yaml
Normal file
@@ -0,0 +1,8 @@
# home server/nas/lab
- hosts: kimchi
  become: yes

  pre_tasks:
    - import_tasks: tasks/update_k3s.yaml
      vars:
        k3s_version: v1.29.0+k3s1
384
CLAUDE.md
Normal file
@@ -0,0 +1,384 @@
# CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

## Overview

This is a home server Infrastructure as Code (IaC) repository that manages a K3s Kubernetes cluster deployment with various self-hosted applications. The setup combines Ansible for system configuration and Kubernetes manifests for application deployment.

**Quick Links:**
- [Storage and Backup Guide](STORAGE.md) - Detailed storage, bcache, backup, and drive health information
- [Ansible Review](Ansible/ANSIBLE_REVIEW_2025.md) - Ansible configuration review and updates (October 2025)
- [Ansible Maintenance Roles](Ansible/MAINTENANCE_ROLES_README.md) - Automated setup for maintenance, backups, and monitoring
- [Certificate Management](#automated-certificate-management) - K3s certificate rotation
- [System Maintenance](#system-maintenance) - Automated maintenance tasks

## Architecture

### Core Infrastructure
- **K3s**: Lightweight Kubernetes distribution running on home server hardware
- **Ansible**: System configuration and K3s cluster setup
- **Traefik**: Ingress controller (default with K3s)
- **Cert-manager**: TLS certificate management

### Application Stack
- **Core services**: Bitwarden (password manager), Nextcloud (file storage), Home Assistant
- **Media stack**: Jellyfin, Sonarr, Deluge, Openbooks, Jackett, Transmission
- **Development**: Gitea (git hosting), Homarr (dashboard)
- **Infrastructure**: Cloudflare Tunnel for external access

### Storage Architecture
- **NVMe SSD**: Kingston SNV2S250G (233GB)
  - `/dev/nvme0n1p1` (512MB) → `/boot/efi`
  - `/dev/nvme0n1p2` (70GB) → `/` (root filesystem)
  - `/dev/nvme0n1p3` (163GB) → bcache cache device
- **HDD 1**: Seagate IronWolf ST4000VN006 (4TB)
  - `/dev/sda1` → `/mnt/backup-mirror` (backup mirror)
- **HDD 2**: Seagate IronWolf ST4000VN006 (4TB)
  - `/dev/sdb` → bcache backing device
  - Combined with NVMe cache → `/dev/bcache0` → `/mnt/bcache` (main data storage)

**Bcache Configuration**:
- Cache mode: `writearound` (writes go to backing device, reads are cached)
- Cache set UUID: `74a7d177-65f4-4902-9fe5-e596602c28d4`
- Provides SSD-accelerated storage for Kubernetes persistent volumes

**Bcache Management:**
```bash
# Check bcache status
cat /sys/block/bcache0/bcache/state

# Check cache mode
cat /sys/block/bcache0/bcache/cache_mode

# View cache statistics
cat /sys/block/bcache0/bcache/stats_total/cache_hits
cat /sys/block/bcache0/bcache/stats_total/cache_misses

# If cache is detached (shows "no cache"), re-attach it:
echo "74a7d177-65f4-4902-9fe5-e596602c28d4" | sudo tee /sys/block/bcache0/bcache/attach

# Verify cache attached (should show "clean" or "dirty")
cat /sys/block/bcache0/bcache/state
```

## Common Commands

### Ansible Operations

**Note**: Ansible configuration was last reviewed and updated in October 2025. See [Ansible/ANSIBLE_REVIEW_2025.md](Ansible/ANSIBLE_REVIEW_2025.md) for details.

**Maintenance Automation**: All automated maintenance, backups, and monitoring can now be deployed via Ansible. See [Ansible/MAINTENANCE_ROLES_README.md](Ansible/MAINTENANCE_ROLES_README.md) for details.

```bash
# Setup automated maintenance, backups, and monitoring
ansible-playbook -i Ansible/inventory.ini Ansible/setup_maintenance.yml --check --diff
ansible-playbook -i Ansible/inventory.ini Ansible/setup_maintenance.yml

# Setup only specific components
ansible-playbook -i Ansible/inventory.ini Ansible/setup_maintenance.yml --tags backup
ansible-playbook -i Ansible/inventory.ini Ansible/setup_maintenance.yml --tags smart
ansible-playbook -i Ansible/inventory.ini Ansible/setup_maintenance.yml --tags maintenance

# Deploy complete home server setup (system + K3s)
ansible-playbook -i Ansible/inventory.ini Ansible/setup_home_server.yml --check --diff
ansible-playbook -i Ansible/inventory.ini Ansible/setup_home_server.yml

# Update K3s cluster
ansible-playbook -i Ansible/inventory.ini Ansible/update_k3s.yaml

# Check bcache status
ansible-playbook -i Ansible/inventory.ini Ansible/setup_maintenance.yml --tags bcache
```

### Kubernetes Operations
```bash
# Apply all manifests in a directory
kubectl apply -f k8s/core/bitwarden/

# Deploy Nextcloud via Helm
helm upgrade --install nextcloud ./k8s/nextcloud/

# Check cluster status
kubectl get nodes
kubectl get pods -A

# Access secrets (use with caution)
./k8s/secret-dump.sh
```

### Infrastructure Management
```bash
# Install Helm (if needed)
./get_helm.sh
```

## Key Directories

- `Ansible/`: System configuration, K3s deployment, user management
- `k8s/core/`: Essential cluster services (cert-manager, bitwarden, etc.)
- `k8s/media/`: Media server applications
- `k8s/lab/`: Development and experimental services
- `k8s/nextcloud/`: Helm chart for Nextcloud deployment
- `certs/`: TLS certificates for services

## Inventory & Configuration

- **Primary server**: `kimchi` (192.168.178.55) - x86_64 K3s master
- **Secondary**: `pi-one` (192.168.178.11) - ARM device
- Ansible inventory: `Ansible/inventory.ini`
- K3s config location: `/var/lib/rancher/k3s/server/manifests/`
## Important Notes

- K3s automatically deploys manifests placed in `/var/lib/rancher/k3s/server/manifests/`
- Traefik ingress controller is pre-installed with K3s
- SSH port configuration may vary (see ssh_juggle_port.yml)
- Persistent volumes use local storage with specific node affinity
- Some services have hardcoded passwords/credentials (should be moved to secrets)
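The auto-deploy behaviour in the first note can be exercised with a quick sketch. The namespace manifest below is illustrative; `MANIFEST_DIR` is the stock K3s default, and the copy is skipped on hosts without K3s:

```shell
# Write a trivial manifest, then drop it into K3s's auto-deploy directory.
# K3s applies anything placed here; note it does not necessarily
# garbage-collect resources just because a file is removed.
MANIFEST_DIR=${MANIFEST_DIR:-/var/lib/rancher/k3s/server/manifests}
cat > /tmp/demo-namespace.yaml <<'EOF'
apiVersion: v1
kind: Namespace
metadata:
  name: demo
EOF
if [ -d "$MANIFEST_DIR" ] && [ -w "$MANIFEST_DIR" ]; then
  cp /tmp/demo-namespace.yaml "$MANIFEST_DIR"/
  echo "manifest installed; K3s will apply it shortly"
else
  echo "K3s manifest directory not found or not writable; skipping"
fi
```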
## Certificate Management
|
||||
|
||||
K3s certificates expire and need periodic rotation. If kubectl commands fail with authentication errors like:

```
x509: certificate has expired or is not yet valid
```

**Solution:**
```bash
# Rotate certificates
sudo k3s certificate rotate

# Restart K3s service
sudo systemctl restart k3s

# Update kubeconfig with new certificates
sudo cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
sudo chown $USER:$USER ~/.kube/config
```

**Signs of certificate expiration:**
- kubectl commands return "the server has asked for the client to provide credentials"
- K3s logs show x509 certificate expired errors
- Unable to access cluster API even from the master node
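
Rather than waiting for these symptoms, the expiry date can be read directly from the certificate. A minimal sketch, assuming the standard K3s certificate layout (verify the `client-admin.crt` path on your install):

```shell
# Sketch: report days until the K3s admin client certificate expires.
# The certificate path is an assumption based on the default K3s layout.
days_until() {
  # Days from now until the given date string (e.g. "Dec 19 17:55:13 2026 GMT")
  echo $(( ( $(date -d "$1" +%s) - $(date +%s) ) / 86400 ))
}

CERT=/var/lib/rancher/k3s/server/tls/client-admin.crt
if [ -r "$CERT" ]; then
  end=$(openssl x509 -enddate -noout -in "$CERT" | cut -d= -f2)
  echo "client-admin.crt expires in $(days_until "$end") days"
fi
```

If the reported number is below the 30-day rotation threshold, the automated rotation described below should kick in at the next timer run.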

## Automated Certificate Management

**Automatic rotation is now configured** using systemd timers:

### Scripts Available:
```bash
# Automated rotation script (runs monthly via systemd timer)
./scripts/k3s-cert-rotate.sh [--force] [--dry-run]

# Certificate monitoring script
./scripts/k3s-cert-check.sh
```

### Manual Management:
```bash
# Check systemd timer status
sudo systemctl status k3s-cert-rotation.timer

# See next scheduled run
systemctl list-timers k3s-cert-rotation.timer

# Manual rotation (if needed)
sudo ./scripts/k3s-cert-rotate.sh --force

# Check certificate status
sudo ./scripts/k3s-cert-check.sh
```

### Configuration:
- **Automatic rotation**: Monthly via systemd timer
- **Rotation threshold**: 30 days before expiration
- **Backup location**: `/var/lib/rancher/k3s/server/cert-backups/`
- **Logs**: `/var/log/k3s-cert-rotation.log`

The automation includes:
- Certificate expiration checking
- Automatic backup before rotation
- Service restart and kubeconfig updates
- Health verification after rotation
- Logging and notifications

## Drive Health Monitoring

### SMART Monitoring with smartmontools

**Automatic SMART monitoring is configured** for all drives:

**Configuration**: `/etc/smartd.conf`

**Monitored Drives:**
- `/dev/sda` - Seagate IronWolf 4TB (backup mirror)
- `/dev/sdb` - Seagate IronWolf 4TB (bcache backing device)
- `/dev/nvme0n1` - Kingston SNV2S250G 233GB (cache + system)

**Monitoring Features:**
- Daily short self-tests at 2:00 AM
- Weekly long self-tests on Saturdays at 3:00 AM
- Temperature monitoring (warns at 45°C for HDDs, 60°C for NVMe)
- Automatic alerts to syslog for any SMART failures
- Tracks reallocated sectors and pending sectors
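
In smartd.conf terms, that schedule maps onto directives roughly like the sketch below (illustrative only, not a verbatim copy of the deployed file; the `-s` regex encodes short tests daily at 02h and long tests Saturdays at 03h, and `-W` sets the temperature warning thresholds):

```
# Illustrative /etc/smartd.conf entries (check the real file for exact flags)
/dev/sdb     -a -o on -S on -s (S/../.././02|L/../../6/03) -W 0,0,45 -m root
/dev/nvme0n1 -a -s (S/../.././02|L/../../6/03) -W 0,0,60 -m root
```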

**Useful Commands:**
```bash
# Check drive health status
sudo smartctl -H /dev/sda
sudo smartctl -H /dev/sdb
sudo smartctl -H /dev/nvme0n1

# View full SMART attributes
sudo smartctl -a /dev/sda

# Check service status
systemctl status smartmontools

# View SMART logs
sudo journalctl -u smartmontools
```

**Monitored Drives** (continuous):
- `/dev/sdb`: ~20,000 power-on hours, 42°C, 0 reallocated sectors ✓
- `/dev/nvme0n1`: 4% wear level, 58°C, 100% spare available ✓

**Backup Drive** (`/dev/sda`):
- **Not continuously monitored** - spins down after 10 minutes to save energy
- Health checked automatically during weekly backup runs
- Expected stats: ~18,000 power-on hours, 37°C, 0 reallocated sectors ✓
- Power savings: ~35-50 kWh/year
## Networking

- Cloudflare Tunnel provides external access
- Gitea uses custom SSH port 55522
- Internal cluster networking via Traefik ingress
- TLS termination handled by cert-manager + Let's Encrypt

## System Maintenance

### Automated Tasks Summary

The home server has four automated maintenance systems running:

| Task | Schedule | Purpose | Log Location |
|------|----------|---------|--------------|
| **K3s Certificate Rotation** | Monthly (1st of month, 12:39 AM) | Rotates K3s certificates if expiring within 30 days | `/var/log/k3s-cert-rotation.log` |
| **System Maintenance** | Quarterly (Jan/Apr/Jul/Oct 1, 3:00 AM) | Prunes images, cleans logs, runs apt cleanup | `/var/log/k3s-maintenance.log` |
| **Backup Mirror Sync** | Weekly (Sundays, 2:00 AM) | Syncs `/mnt/bcache` to `/mnt/backup-mirror` | `/var/log/backup-mirror-sync.log` |
| **SMART Self-Tests** | Daily short (2:00 AM), weekly long (Sat 3:00 AM) | Tests drive health | `journalctl -u smartmontools` |

**Check all timers:**
```bash
systemctl list-timers
```

### Automated Quarterly Maintenance

**Automatic maintenance is now configured** using systemd timers:

**Scripts Available:**
```bash
# Quarterly maintenance script (runs Jan 1, Apr 1, Jul 1, Oct 1 at 3:00 AM)
/usr/local/bin/k3s-maintenance.sh

# Check maintenance timer status
systemctl list-timers k3s-maintenance.timer
systemctl status k3s-maintenance.timer
```

**What it does:**
- Prunes unused container images
- Cleans journal logs (keeps 30 days)
- Runs apt autoremove and autoclean
- Logs to `/var/log/k3s-maintenance.log`

**Configuration:**
- **Schedule**: Quarterly (January 1, April 1, July 1, October 1 at 3:00 AM)
- **Script location**: `/usr/local/bin/k3s-maintenance.sh`
- **Service files**: `/etc/systemd/system/k3s-maintenance.{service,timer}`
- **Log location**: `/var/log/k3s-maintenance.log`
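
A quarterly schedule like this is expressed in the timer unit roughly as follows (a sketch of `/etc/systemd/system/k3s-maintenance.timer`, not the deployed file):

```
[Unit]
Description=Quarterly K3s maintenance

[Timer]
# 1st of January, April, July and October at 03:00
OnCalendar=*-01,04,07,10-01 03:00:00
Persistent=true

[Install]
WantedBy=timers.target
```

`Persistent=true` makes systemd catch up on a missed maintenance window at the next boot, which matters for a server that may be powered off on the 1st.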

### Manual Maintenance Tasks

**System Updates** (recommended monthly):
```bash
sudo apt update && sudo apt upgrade -y
sudo apt autoremove -y
```

**Note**: The old Kubernetes APT repository (`apt.kubernetes.io`) has been removed as it was deprecated. K3s provides its own `kubectl` via a symlink at `/usr/local/bin/kubectl -> /usr/local/bin/k3s`.

**Disk Cleanup** (as needed):
```bash
# Clean old journal logs
sudo journalctl --vacuum-time=30d

# Prune container images
sudo crictl rmi --prune

# Check disk usage
df -h /
```

**Important Locations:**
- Root partition: `/dev/nvme0n1p2` (69GB total)
- Data storage: `/mnt/bcache` (3.6TB bcache)
- Backup mirror: `/mnt/backup-mirror` (3.6TB)
- K3s data: `/var/lib/rancher/k3s/`
- Journal logs: `/var/log/journal/`
## Backup System

### Automated Weekly Backups

**Automatic backups are configured** using rsync and systemd timers:

**Scripts Available:**
```bash
# Weekly backup script (runs Sundays at 2:00 AM)
/usr/local/bin/backup-mirror-sync.sh

# Check backup timer status
systemctl list-timers backup-mirror-sync.timer
systemctl status backup-mirror-sync.timer

# View backup logs
sudo tail -f /var/log/backup-mirror-sync.log

# Run manual backup
sudo /usr/local/bin/backup-mirror-sync.sh
```

**Configuration:**
- **Source**: `/mnt/bcache/` (main data storage with bcache)
- **Destination**: `/mnt/backup-mirror/` (4TB mirror on `/dev/sda1`)
- **Schedule**: Weekly on Sundays at 2:00 AM
- **Method**: Rsync with incremental sync and deletion of removed files
- **Script location**: `/usr/local/bin/backup-mirror-sync.sh`
- **Service files**: `/etc/systemd/system/backup-mirror-sync.{service,timer}`
- **Log location**: `/var/log/backup-mirror-sync.log`
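
The core of such a sync script is a guarded rsync. The sketch below is illustrative (the deployed `/usr/local/bin/backup-mirror-sync.sh` may differ); the `mountpoint` guard matters because syncing into an unmounted `/mnt/backup-mirror` would silently fill the root disk instead of the mirror:

```shell
# Sketch of the sync logic; not a verbatim copy of the deployed script.
backup_sync() {
  local src="$1" dst="$2" log="$3"
  # Refuse to rsync into a directory that is not a mounted filesystem.
  if ! mountpoint -q "$dst"; then
    echo "$(date): $dst is not mounted, aborting" >> "$log"
    return 1
  fi
  echo "$(date): sync start" >> "$log"
  rsync -avh --delete "$src" "$dst" >> "$log" 2>&1
  echo "$(date): sync done (rc=$?)" >> "$log"
}

# The real invocation would be:
#   backup_sync /mnt/bcache/ /mnt/backup-mirror/ /var/log/backup-mirror-sync.log
```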

**What it backs up:**
- All Kubernetes persistent volume data
- Nextcloud files (2TB)
- Jellyfin media library
- Gitea repositories
- Bitwarden data
- Home Assistant configuration
- All other application data on `/mnt/bcache`

**Restore Process:**
In case of main drive failure, data can be restored from `/mnt/backup-mirror`:
```bash
# Verify backup mount
mountpoint /mnt/backup-mirror

# Restore all data (if bcache is rebuilt)
sudo rsync -avh --delete /mnt/backup-mirror/ /mnt/bcache/

# Or restore specific directories
sudo rsync -avh /mnt/backup-mirror/nextcloud/ /mnt/bcache/nextcloud/
```
274
MONITORING.md
Normal file
@@ -0,0 +1,274 @@

# Monitoring Stack Resource Analysis

**Date**: October 23, 2025
**System**: kimchi homelab server

## Current System Status

**System Specifications:**
- **CPU**: 4 cores
- **Memory**: 7.6 GB total
- **Root Disk**: 69 GB NVMe (`/dev/nvme0n1p2`)
- **Data Storage**: 3.6 TB bcache (`/mnt/bcache`)

**Current Usage:**
- **Load**: 0.52 (13% CPU on 4 cores)
- **Memory**: 3.3 GB / 7.6 GB used (43%)
- **Available**: 3.8 GB
- **Disk**: 47 GB / 69 GB used (72% on root)
- **Running pods**: 29 total

**Top Memory Consumers:**
- K3s server: 687 MB (8.6%)
- Jellyfin: 458 MB (5.7%)
- MariaDB (Nextcloud): 330 MB (4.1%)
- Home Assistant: 306 MB (3.8%)

## Prometheus + Grafana Resource Impact

For a **minimal monitoring stack** in this homelab setup:

### Expected Resource Usage:

| Component | Memory | CPU | Notes |
|-----------|--------|-----|-------|
| **Prometheus** | 400-600 MB | 200-400m (5-10%) | Main metrics database |
| **Grafana** | 150-250 MB | 100-200m (2-5%) | Visualization UI |
| **Node Exporter** | 20-50 MB | 50-100m (1-2%) | Per-node metrics |
| **kube-state-metrics** | 50-100 MB | 50-100m (1-2%) | K8s cluster metrics |
| **AlertManager** (optional) | 50-100 MB | 50m (<1%) | Alert routing |
| **Total (minimal)** | **~700-1100 MB** | **~450-800m (11-20%)** | |

### Impact on System:

**CPU Load Increase:**
- Current: 13% (0.52 load average)
- After monitoring: **24-33%** (0.96-1.32 load average)
- **Estimated increase: +11-20%** (well within headroom)

**Memory Impact:**
- Current: 3.3 GB used / 3.8 GB available
- After monitoring: 4.0-4.4 GB used / 2.7-3.1 GB available
- **Estimated increase: +700-1100 MB** (manageable, but less buffer)

**Disk Impact:**
- Prometheus data: **2-5 GB** for 15-day retention with ~30 pods
- Root partition: Already at 72% (47 GB used of 69 GB)
- **Recommendation**: Store Prometheus data on `/mnt/bcache` instead of root

## Recommended Configuration

### Minimal kube-prometheus-stack Setup

**Helm chart**: `prometheus-community/kube-prometheus-stack`

**values.yaml** (optimized for homelab):
```yaml
# Prometheus configuration
prometheus:
  prometheusSpec:
    retention: 15d  # 15 days of metrics
    resources:
      requests:
        memory: 512Mi
        cpu: 250m
      limits:
        memory: 1Gi
        cpu: 500m
    storageSpec:
      volumeClaimTemplate:
        spec:
          storageClassName: local-path  # Uses /mnt/bcache
          accessModes: ["ReadWriteOnce"]
          resources:
            requests:
              storage: 5Gi

# Grafana configuration
grafana:
  resources:
    requests:
      memory: 128Mi
      cpu: 100m
    limits:
      memory: 256Mi
      cpu: 200m
  persistence:
    enabled: true
    storageClassName: local-path
    size: 1Gi

# Node exporter (per-node metrics)
prometheus-node-exporter:
  resources:
    requests:
      memory: 30Mi
      cpu: 50m
    limits:
      memory: 50Mi
      cpu: 100m

# Kube-state-metrics (cluster metrics)
kube-state-metrics:
  resources:
    requests:
      memory: 64Mi
      cpu: 50m
    limits:
      memory: 128Mi
      cpu: 100m

# AlertManager (optional - disable if not needed)
alertmanager:
  enabled: false  # Can enable later if needed
```

### Installation Commands

```bash
# Add Prometheus community Helm repo
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

# Create monitoring namespace
kubectl create namespace monitoring

# Install with custom values
helm install kube-prometheus-stack prometheus-community/kube-prometheus-stack \
  -n monitoring \
  -f values.yaml

# Check installation
kubectl get pods -n monitoring
kubectl get svc -n monitoring

# Access Grafana (port-forward)
kubectl port-forward -n monitoring svc/kube-prometheus-stack-grafana 3000:80

# Default Grafana credentials
# Username: admin
# Password: prom-operator (check with: kubectl get secret -n monitoring kube-prometheus-stack-grafana -o jsonpath="{.data.admin-password}" | base64 --decode)
```

## What You'll Get

**Features:**
- Real-time CPU/memory/disk metrics for all pods and nodes
- Historical data for 15 days
- Pre-built dashboards for Kubernetes cluster overview
- Pod resource usage tracking
- Node health monitoring
- Ability to troubleshoot performance issues
- Optional alert notifications

**Useful Dashboards:**
- Kubernetes Cluster Overview (ID: 315)
- Kubernetes Pods Resource Usage (ID: 6336)
- Node Exporter Full (ID: 1860)
- K8s Cluster RAM and CPU Utilization (ID: 16734)

## Alternatives to Consider

### If Resources Are Tight:

1. **Metrics Server Only**
   - Resource usage: ~50 MB memory, minimal CPU
   - Provides: `kubectl top nodes` and `kubectl top pods` commands
   - No historical data, no dashboards
   ```bash
   kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
   ```

2. **Netdata**
   - Resource usage: ~100-200 MB total
   - Lighter weight, simpler setup
   - Good for single-node clusters
   - Built-in web UI

3. **Prometheus + Remote Write**
   - Run Prometheus locally but send metrics to external Grafana Cloud
   - Free tier available (10k series, 14-day retention)
   - Saves local resources
## Monitoring Best Practices

### Resource Tuning:
- Start with conservative limits and increase if needed
- Monitor Prometheus memory usage - it grows with the number of metrics
- Use metric relabeling to drop unnecessary metrics
- Adjust the retention period based on actual needs

### Storage Considerations:
- Prometheus needs fast I/O - bcache is ideal
- Plan for ~300-500 MB per day of metrics with 30 pods
- Enable persistent volumes to survive pod restarts

### Query Optimization:
- Use recording rules for frequently-used queries
- Avoid long time ranges in dashboards
- Use downsampling for historical data

## Prometheus Metrics Retention Calculation

**Formula**: Storage ≈ retention (days) × active series × samples per series per day × bytes per sample

For this cluster:
- ~30 pods × ~1000 metrics per pod = 30k time series
- A sample every 15s = 5760 samples/day per series
- Compressed: ~1-2 bytes per sample
- 15-day retention: **~2.5-5 GB**
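
Plugging those numbers in (1.5 bytes/sample as the midpoint) confirms the estimate; shell integer arithmetic is enough:

```shell
series=30000          # ~30 pods x ~1000 metrics per pod
samples_per_day=5760  # one sample every 15 seconds
days=15               # retention period
# 1.5 bytes/sample, kept as integer math (x3 / 2)
bytes=$(( series * samples_per_day * days * 3 / 2 ))
echo "$bytes bytes ~= $(( bytes / 1024 / 1024 / 1024 )) GiB"
# about 3.6 GB, comfortably inside the 2.5-5 GB band
```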

## Useful Prometheus Queries

### CPU Usage:
```promql
# CPU usage by pod
sum(rate(container_cpu_usage_seconds_total{namespace!=""}[5m])) by (pod, namespace)

# Node CPU usage
100 - (avg by (instance) (irate(node_cpu_seconds_total{mode="idle"}[5m])) * 100)
```

### Memory Usage:
```promql
# Memory usage by pod
sum(container_memory_working_set_bytes{namespace!=""}) by (pod, namespace)

# Memory usage percentage
(node_memory_MemTotal_bytes - node_memory_MemAvailable_bytes) / node_memory_MemTotal_bytes * 100
```

### Disk Usage:
```promql
# Disk usage by mountpoint
(node_filesystem_size_bytes - node_filesystem_avail_bytes) / node_filesystem_size_bytes * 100

# Bcache hit rate
rate(bcache_cache_hits_total[5m]) / (rate(bcache_cache_hits_total[5m]) + rate(bcache_cache_misses_total[5m]))
```
## Bottom Line

**Verdict**: Yes, you can run Prometheus + Grafana with current resources.

**Impact Summary:**
- CPU load: **13% → 24-33%** ✓ Acceptable
- Memory: **43% → 53-58%** ✓ Acceptable (but less buffer)
- Disk: **Need to use /mnt/bcache** ⚠️ Root partition too full

**Critical Requirement:**
- Ensure Prometheus stores data on `/mnt/bcache` using the `local-path` storage class
- Do NOT store on the root partition (already at 72%)

**Next Steps:**
1. Create `values.yaml` with the resource limits above
2. Install kube-prometheus-stack via Helm
3. Monitor actual resource usage for 1 week
4. Tune retention period and limits as needed
5. Set up ingress for Grafana access (optional)

## References

- [kube-prometheus-stack Helm Chart](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack)
- [Prometheus Operator Documentation](https://prometheus-operator.dev/)
- [Grafana Dashboard Directory](https://grafana.com/grafana/dashboards/)
17
README.md
Normal file
@@ -0,0 +1,17 @@

# Notes

K3s deploys a Traefik ingress controller by default. See `traefik` in the `kube-system` namespace.

Any Kubernetes manifests found in `/var/lib/rancher/k3s/server/manifests` will automatically be deployed to K3s in a manner similar to `kubectl apply`. Manifests deployed this way are managed as AddOn custom resources and can be viewed by running `kubectl get addon -A`. You will find AddOns for packaged components such as CoreDNS, Local-Storage, and Traefik. AddOns are created automatically by the deploy controller and are named after their filename in the manifests directory.

- Currently there is an additional port exposed for Gitea SSH (55522)

## TODO

### SSH for Gitea
https://inlets.dev/blog/2023/01/27/self-hosting-gitea-kubernetes.html

### User group for Servarr (Prowlarr, Radarr, Sonarr, Lidarr)
https://wiki.servarr.com/docker-guide
81
SECURITY.md
Normal file
@@ -0,0 +1,81 @@

# Security Review - March 2026

## Web-Facing Attack Surface

### CRITICAL

#### 1. Plaintext credentials committed to git
- Bitwarden admin token, SMTP passwords, DB passwords, Redis password, and the system user password are all in plaintext
- Files: `k8s/core/bitwarden/configmaps.yml`, `k8s/nextcloud/values.yml`, `Ansible/roles/system/defaults/main.yml`
- **Fix**: Use Kubernetes Secrets with sealed-secrets or external-secrets-operator. Use Ansible Vault for Ansible vars.

#### 2. Cloudflare Tunnel disables TLS verification
- `noTLSVerify: true` in `k8s/core/cloudflare-tunnel/cloudflared.yml` — allows MITM between Cloudflare and your cluster
- Internal traffic flows over plain HTTP (`http://192.168.178.55:80`)

#### ~~3. Bitwarden public signups enabled~~ FIXED
- ~~`SIGNUPS_ALLOWED: "true"` — anyone on the internet can create an account on your password manager~~
- Fixed 2026-03-14: Set `SIGNUPS_ALLOWED: "false"`

#### 4. No authentication middleware on any public ingress
- Jellyfin, Gitea, JNR-Web are accessible without any auth gate
- No OAuth2 proxy, no basic auth, no SSO configured

### HIGH

#### ~~5. Staging TLS certificates on public services~~ FIXED
- ~~Bitwarden, Gitea, Jellyfin, JNR-Web all use `letsencrypt-staging`~~
- Fixed 2026-03-14: All public services switched to `letsencrypt-prod` with DNS01 solver via Cloudflare
- Also fixed typo `letsencrpyt-staging` in Gitea ingress
- Cloudflare API token stored as K8s secret `cloudflare-api-token` in the `cert-manager` namespace

#### ~~6. No security headers~~ FIXED
- ~~Missing `Strict-Transport-Security`, `X-Frame-Options`, `X-Content-Type-Options` on all ingress routes~~
- Fixed 2026-03-14: Traefik `security-headers` middleware deployed in the `default` namespace
- Headers: HSTS (7 days, no preload), X-Frame-Options SAMEORIGIN, X-Content-Type-Options nosniff, XSS filter, referrer policy, permissions policy; strips Server/X-Powered-By

#### ~~7. No rate limiting~~ FIXED
- ~~No brute-force protection on any login endpoint~~
- Fixed 2026-03-14: Traefik `rate-limit` middleware (100 req/min, 200 burst) applied to all public ingresses
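
A Traefik Middleware matching those numbers would look roughly like this sketch (illustrative, not the deployed manifest; the `apiVersion` depends on the bundled Traefik version):

```yaml
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: rate-limit
  namespace: default
spec:
  rateLimit:
    average: 100   # requests allowed per period
    period: 1m     # i.e. 100 req/min
    burst: 200
```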

#### ~~Sonarr publicly accessible~~ FIXED
- Fixed 2026-03-14: Removed `sonarr.schick-web.site` from ingress, now internal-only (`sonarr.kimchi`)

#### 8. Privileged containers (LOW practical risk)
- Home Assistant: `privileged: true` + `hostNetwork: true` — required for LAN device discovery (mDNS, Zigbee). Not exposed publicly (no `.schick-web.site` ingress), so exploitation requires an attacker already on the LAN.
- Gitea Runner: privileged Docker-in-Docker — standard for CI runners that build containers. Risk is limited to compromised Gitea accounts pushing malicious workflows; public fork CI triggers are disabled by default.
- **Verdict**: Acceptable tradeoffs for a home server. Both flags serve functional purposes, not oversight.

#### 9. No network policies
- Any compromised pod can reach every other pod (lateral movement)
### MEDIUM

#### 10. No container hardening
- No `runAsNonRoot`, no `readOnlyRootFilesystem`, no capability drops on most workloads

#### 11. Minimal RBAC
- Only Bitwarden has RBAC defined; other services use default service accounts

## Action Plan

### Done
- [x] Set `SIGNUPS_ALLOWED: "false"` in Bitwarden configmap
- [x] Switch all cert-manager issuers from `letsencrypt-staging` to `letsencrypt-prod` (DNS01 via Cloudflare)
- [x] Fix issuer typo in Gitea ingress
- [x] Add Traefik security-headers middleware to all public ingresses
- [x] Add rate-limiting middleware on all public ingresses
- [x] Remove Sonarr from public access

### Do now
- [ ] Rotate all credentials that are committed in plaintext, then move them to Kubernetes Secrets

### Do soon (days)
- [ ] Set `noTLSVerify: false` on the Cloudflare tunnel (requires valid internal certs)
- [ ] Put an auth proxy (e.g., Authelia or OAuth2-proxy) in front of services that lack built-in auth
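
Once valid internal certificates exist, the corresponding cloudflared ingress entry would change roughly as sketched below (the hostname and config layout are illustrative, not copied from `cloudflared.yml`):

```yaml
ingress:
  - hostname: example.schick-web.site    # illustrative hostname
    service: https://192.168.178.55:443  # TLS to the cluster instead of plain HTTP
    originRequest:
      noTLSVerify: false                 # verify the origin certificate
```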

### Do when possible (weeks)
- [ ] Add NetworkPolicies to restrict pod-to-pod traffic
- [ ] ~~Remove `privileged: true` from Home Assistant and Gitea runner~~ Accepted risk — both need privileged mode for functional reasons and are not publicly exposed
- [ ] Add `securityContext` (non-root, read-only root FS, drop all capabilities) across workloads
- [ ] Adopt sealed-secrets or external-secrets-operator for proper secret management
241
STORAGE.md
Normal file
@@ -0,0 +1,241 @@

# Storage and Backup Guide

## Storage Overview

This home server uses a bcache configuration for performance, with a redundant backup mirror.

### Physical Storage

| Device | Type | Capacity | Mount Point | Purpose |
|--------|------|----------|-------------|---------|
| `/dev/nvme0n1p2` | NVMe SSD | 70GB | `/` | Root filesystem |
| `/dev/nvme0n1p3` | NVMe SSD | 163GB | - | Bcache cache device |
| `/dev/sdb` | HDD (IronWolf) | 4TB | - | Bcache backing device |
| `/dev/bcache0` | Bcache | 3.6TB | `/mnt/bcache` | Main data storage (HDD + NVMe cache) |
| `/dev/sda1` | HDD (IronWolf) | 3.6TB | `/mnt/backup-mirror` | Weekly backup mirror |

### Data Layout

```
/mnt/bcache/              # Main storage (933GB used)
├── nextcloud/            # Nextcloud files (~2TB)
├── jellyfin/             # Media library
├── git/                  # Gitea repositories
├── data/                 # General data
│   └── media/            # Media files
├── bitwarden/            # Password vault data
├── home-assistant/       # Home automation config
└── [other services]/

/mnt/backup-mirror/       # Mirror backup (synced weekly)
└── [same structure as above]
```

## Bcache Status

### Check Cache Status
```bash
# Should show "clean" or "dirty" (NOT "no cache")
cat /sys/block/bcache0/bcache/state

# View cache performance
cat /sys/block/bcache0/bcache/stats_total/cache_hits
cat /sys/block/bcache0/bcache/stats_total/cache_misses
```

### Re-attach a Detached Cache
If the bcache state shows "no cache":
```bash
echo "74a7d177-65f4-4902-9fe5-e596602c28d4" | sudo tee /sys/block/bcache0/bcache/attach
cat /sys/block/bcache0/bcache/state  # Should now show "clean"
```
## Backup System

### Automated Backups
- **Schedule**: Every Sunday at 2:00 AM
- **Source**: `/mnt/bcache/`
- **Destination**: `/mnt/backup-mirror/`
- **Method**: Rsync incremental sync

### Manual Backup
```bash
# Run backup manually
sudo /usr/local/bin/backup-mirror-sync.sh

# Monitor backup progress
sudo tail -f /var/log/backup-mirror-sync.log

# Check backup status
systemctl status backup-mirror-sync.service
systemctl list-timers backup-mirror-sync.timer
```

### Restore from Backup

#### Full Restore
```bash
# Ensure backup drive is mounted
mountpoint /mnt/backup-mirror

# Restore everything
sudo rsync -avh --delete /mnt/backup-mirror/ /mnt/bcache/
```

#### Selective Restore
```bash
# Restore a specific service
sudo rsync -avh /mnt/backup-mirror/nextcloud/ /mnt/bcache/nextcloud/

# Restore with a dry-run first
sudo rsync -avhn /mnt/backup-mirror/gitea/ /mnt/bcache/gitea/
```

## Drive Health Monitoring

### Check Drive Health

**Note**: `/dev/sda` (backup mirror) spins down after 10 minutes to save energy. Accessing it will wake the drive (takes ~10 seconds).

```bash
# Quick health check
sudo smartctl -H /dev/sda      # Backup drive (may spin up if in standby)
sudo smartctl -H /dev/sdb      # Main storage drive
sudo smartctl -H /dev/nvme0n1  # Cache/system drive

# Detailed SMART info
sudo smartctl -a /dev/sda

# Check if backup drive is spun down
sudo hdparm -C /dev/sda
```

### View Drive Temperature
```bash
# HDDs
sudo smartctl -a /dev/sda | grep Temperature
sudo smartctl -a /dev/sdb | grep Temperature

# NVMe
sudo smartctl -a /dev/nvme0n1 | grep Temperature
```

### SMART Test History
```bash
# View recent self-tests
sudo smartctl -l selftest /dev/sda

# Monitor smartd service
systemctl status smartmontools
sudo journalctl -u smartmontools -n 50
```

## Disk Space Management

### Check Usage
```bash
# Overall usage
df -h

# Specific mounts
df -h /
df -h /mnt/bcache
df -h /mnt/backup-mirror

# Find large directories
du -sh /mnt/bcache/*
du -sh /mnt/backup-mirror/*
```

### Cleanup Commands
```bash
# Clean journal logs (keeps 30 days)
sudo journalctl --vacuum-time=30d

# Prune unused container images
sudo crictl rmi --prune

# Remove old packages
sudo apt autoremove -y
sudo apt autoclean -y
```

## Emergency Procedures

### If the bcache drive fails (/dev/sdb)
1. Boot into recovery mode
2. Mount the backup drive: `sudo mount /dev/sda1 /mnt/backup-mirror`
3. Replace the failed drive
4. Rebuild the bcache configuration (see bcache docs)
5. Restore data: `sudo rsync -avh /mnt/backup-mirror/ /mnt/bcache/`

### If the backup drive fails (/dev/sda)
1. Replace the drive
2. Partition: `sudo parted /dev/sda mklabel gpt && sudo parted /dev/sda mkpart primary ext4 0% 100%`
3. Format: `sudo mkfs.ext4 -L backup-mirror /dev/sda1`
4. Update fstab if the UUID changed
5. Run a manual backup: `sudo /usr/local/bin/backup-mirror-sync.sh`

### If the cache drive fails (/dev/nvme0n1p3)
1. Data is safe on the backing device (`/dev/sdb`)
2. Replace the NVMe drive
3. Recreate the cache device (see bcache docs)
4. Re-attach it to the bcache set
## Maintenance Logs

All automated maintenance logs are located in `/var/log/`:

- `/var/log/backup-mirror-sync.log` - Backup operations
- `/var/log/k3s-maintenance.log` - System maintenance
- `/var/log/k3s-cert-rotation.log` - Certificate rotation
- `journalctl -u smartmontools` - SMART monitoring
## Power Management

### Backup Drive Auto-Spindown

The backup mirror drive (`/dev/sda`) is configured to automatically spin down after 10 minutes of inactivity to save energy.

**Configuration:**
- Spindown timeout: 10 minutes (120 × 5 seconds)
- Managed by: udev rule `/etc/udev/rules.d/99-backup-drive-power.rules`
- Auto-spinup: Drive wakes automatically when accessed
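
The 120 comes from `hdparm -S` semantics: for values 1-240 the timeout is counted in units of 5 seconds. A small sketch (the deployed udev rule may phrase the setting differently):

```shell
# hdparm -S takes the timeout in 5-second units for values 1-240:
timeout_secs=$((120 * 5))
echo "spindown after $(( timeout_secs / 60 )) minutes"
# prints: spindown after 10 minutes

# Applied manually it would be (requires root):
#   hdparm -S 120 /dev/sda
```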

**Power Savings:**
- Active power: ~5-8W
- Standby power: <1W
- Expected savings: ~35-50 kWh/year (~6 hours active per week for backups)

**Check Power State:**
```bash
# Check if the drive is spun down
sudo hdparm -C /dev/sda

# Manually spin down (for testing)
sudo hdparm -y /dev/sda

# Manually spin up (wake the drive)
ls /mnt/backup-mirror
```

**SMART Monitoring:**
The backup drive is **excluded from continuous SMART monitoring** to avoid waking it up. Instead:
- SMART health is checked automatically **during weekly backup runs**
- Manual checks: `sudo smartctl -H /dev/sda` (will wake the drive if needed)
- Status is logged in `/var/log/backup-mirror-sync.log`
## Performance Tips

1. **Bcache is most effective for frequently accessed data** - the first access is slow (read from HDD), subsequent accesses are fast (served from the NVMe cache)

2. **Monitor the cache hit ratio** to understand effectiveness:
   ```bash
   hits=$(cat /sys/block/bcache0/bcache/stats_total/cache_hits)
   misses=$(cat /sys/block/bcache0/bcache/stats_total/cache_misses)
   echo "scale=2; $hits / ($hits + $misses) * 100" | bc
   ```

3. **Backup timing** - Sunday 2 AM was chosen to minimize impact on media streaming and other services

4. **SMART test timing** - daily tests run only on the active drives (`/dev/sdb`, `/dev/nvme0n1`); the backup drive is checked during weekly backups
32
bitwarden-kimchi.crt
Normal file
@@ -0,0 +1,32 @@
-----BEGIN CERTIFICATE-----
MIIFhTCCA20CFEBUadKj4TB8sOl7L5n6kvlY1X8UMA0GCSqGSIb3DQEBCwUAMH8x
CzAJBgNVBAYTAkRFMQ0wCwYDVQQIDARob21lMQ0wCwYDVQQHDARob21lMRcwFQYD
VQQJDA5TY2hpY2sgSG9zdGluZzEPMA0GA1UEAwwGVGhvbWFzMSgwJgYJKoZIhvcN
AQkBFhl0aG9tYXMuc2NoaWNrQG1haWxib3gub3JnMB4XDTIyMTIxOTE3NTUxM1oX
DTIzMTIxOTE3NTUxM1owfzELMAkGA1UEBhMCREUxDTALBgNVBAgMBGhvbWUxDTAL
BgNVBAcMBGhvbWUxFzAVBgNVBAoMDlNjaGljayBIb3N0aW5nMQ8wDQYDVQQDDAZU
aG9tYXMxKDAmBgkqhkiG9w0BCQEWGXRob21hcy5zY2hpY2tAbWFpbGJveC5vcmcw
ggIiMA0GCSqGSIb3DQEBAQUAA4ICDwAwggIKAoICAQCykR0U2FU4bPaFQi6BnKg3
/8v8cohx6gMGiBP0UeXW1+sfvCa1i1/sUnkfNjoCQNvWrU6F63Z5Vp+XJZJQSxm3
xR24l7hZaJfaWacGgg59Sof4BKdXlIPHPu4bJGCtV2b4Tzw96Q2qLizPrv/zdmPI
yA01W+BeJfQVKbcKlbI2m7je6RFAGcrdpxHVhIG12XNS/xYhDSk5JgnR/uOOh28W
jhKOQmveI4mhP31nGOja/+KnrK0ppfwfQEPCnqx/S+OYRCQgeMwcTn5LizcIAVNP
ZmWxZihe04azRMLEetmaMmGtEwbsgKQAWZATiwyvROfUked0mi5zu9I0LGZpvW1H
Vkd+Z9VRbfa7htpG5rYA/J/2pq4dzhj5B3xzOc+MPlIi50dsOeO2XSfPyYbOu7RT
HH0giEe8+N9Ly9WZEZEuJMQ6b57phtLR5Qqnf7dJXe5L2wnnOZHFhoNf1eZ2nvbA
bvaiHeAVaZXExRTJ+NTRjeGeO84zEfb/pMfYcmL6ZclZjqQYBFpaL2YBqd585LCo
ZWDA4yd36bDlPhCXgsvqTrg1j+i2brtGT5t2bOJPNHI7qux1Nk0JSjr0JNwz++60
rDmGFgnGXXAoR8jN8dJSH2Ur3YxGrMKZvbC1SgiXFAwxNn08d/njbbrEi47i+BQd
+ZVae7wc0zHai+keO8TppwIDAQABMA0GCSqGSIb3DQEBCwUAA4ICAQCCUk9UcJJ5
omhBegzVR96Hi6sd4oc4PyBG/79S8o2NKQtlzxbGbIip/aukguLgiEnN009WOW9u
pUSdj+B+Zfo17AkeuOl90kMEmhDzgNHQQvsdmYYRjTeDszCHkfU69xD/KbOsibyn
Y2Y6K1jQKtN+IqIBqc5lkVoQdf9a1mqruD91L0AWiE5mDrpZruX52OsUbJaopjje
Q5VQv+XfKqlrgTw6aVlR0RXB9dWESbxyUMFWAHT/BclZiclnc/hR8B7Fxlxl1km0
8AzdjEa6jM8r1vVD9/OPY/cz0751n66JmiR6+MbA7YAmbKG1hP/+h1x7IBUE/V15
B1NtVl3ixCvWlW8aIEhy+HsJnI2YgaSkUYql89ydnXIc0HzPHUYVuRdKbcI5qnF7
MYv7nEnkgtTlpAxTF8sfNi2N3bby8wAljJT4ib/XMvj+9zWC2Wg1z1URWvlzmA/w
FJXgGsO/wIViwVTDKXxsW0UZIsy3QKNQlHugKF7Ul1qpvFnNvsSOs6gNrBr1zjsI
fKksnRQ3JY77MO4aG75NoEgwG4YkC5V0+MIYAI+x16irp4F6UF08Xw6xF9j0hlxc
zptEnVTNjd0xf9CaXAkcZ3TnDyIHPTlhQZcBnL92DiLPCY3654D/jzhxx3z5o9gd
NW8t1mgp1K2U6ZRptDZAl+mTdcVbNtmdBQ==
-----END CERTIFICATE-----
17
certs/bitwarden/ca.crt
Normal file
@@ -0,0 +1,17 @@
-----BEGIN CERTIFICATE-----
MIICyjCCAbKgAwIBAgIQcahwEco4B0rArLM4TRDaHzANBgkqhkiG9w0BAQsFADAA
MB4XDTIzMDEyMTEyMDY0NVoXDTIzMDQyMTEyMDY0NVowADCCASIwDQYJKoZIhvcN
AQEBBQADggEPADCCAQoCggEBALjBJrX7lbcxMP0CxOSlpPQ+ne4Mk1pFTLQFmE/E
KTSXj39HHkpEXabFAQmjxSuL7CsNiRwqp7N+MJrU5aLsgolqc+xOGry+7T/rufhV
2AvEQ3ekhSRVYlUUNMHZZ/NyTnEPq/w0A5Jflr5FJkvo1EP9m/dLRxxd0ZaL6M2l
eskDv3Fk7yepObD+zrB1D2RfOP09mv2AQV3LWMwvACsm1DrFR+JjO60W8yFqfH22
fSLgrmpR8P2ChSg4iIsipoQ4n9V3JBuSrp/haIW3GwN0wv0/X8PQTys+IZrhr3+K
42UjB9Qaib6Js8sJk9L0W5WhnXqzHWyowXikj1lquwmUfA8CAwEAAaNAMD4wDgYD
VR0PAQH/BAQDAgWgMAwGA1UdEwEB/wQCMAAwHgYDVR0RAQH/BBQwEoIQcGFzc3dv
cmRzLmtpbWNoaTANBgkqhkiG9w0BAQsFAAOCAQEAW7SB478fnmOxN1KydlouIajp
q3JplLUHVFf1VzvpogKliiNBMn+nnXSAxGz/aHHxtLCXRoqwf1iiu8O8YIMyz3Dq
2HNS8HxSI8fkwGyBF2A0FguLEAbUnRzpf7Wcx0P4HJAMNG7aJSHpX57O999AAUWh
M09ksJRE1Bbd9gSaMIf5V4k98iJNPVXPOY+Lx781Y+nOWTDnrUa6aKEtaQbEgoE3
4H1vuPWweQh9zXCgMd73ue4raYGb5r/TvPN4ydNyyb2JKAfMD9HjS033L6LeRnNj
SVtOD+M0TxC5NcjoPd4CSLErimNM6js2bKEduPH+sFfu/jXUg1yBihxX7YrhPw==
-----END CERTIFICATE-----
17
certs/bitwarden/tls.crt
Normal file
@@ -0,0 +1,17 @@
-----BEGIN CERTIFICATE-----
MIICyjCCAbKgAwIBAgIQcahwEco4B0rArLM4TRDaHzANBgkqhkiG9w0BAQsFADAA
MB4XDTIzMDEyMTEyMDY0NVoXDTIzMDQyMTEyMDY0NVowADCCASIwDQYJKoZIhvcN
AQEBBQADggEPADCCAQoCggEBALjBJrX7lbcxMP0CxOSlpPQ+ne4Mk1pFTLQFmE/E
KTSXj39HHkpEXabFAQmjxSuL7CsNiRwqp7N+MJrU5aLsgolqc+xOGry+7T/rufhV
2AvEQ3ekhSRVYlUUNMHZZ/NyTnEPq/w0A5Jflr5FJkvo1EP9m/dLRxxd0ZaL6M2l
eskDv3Fk7yepObD+zrB1D2RfOP09mv2AQV3LWMwvACsm1DrFR+JjO60W8yFqfH22
fSLgrmpR8P2ChSg4iIsipoQ4n9V3JBuSrp/haIW3GwN0wv0/X8PQTys+IZrhr3+K
42UjB9Qaib6Js8sJk9L0W5WhnXqzHWyowXikj1lquwmUfA8CAwEAAaNAMD4wDgYD
VR0PAQH/BAQDAgWgMAwGA1UdEwEB/wQCMAAwHgYDVR0RAQH/BBQwEoIQcGFzc3dv
cmRzLmtpbWNoaTANBgkqhkiG9w0BAQsFAAOCAQEAW7SB478fnmOxN1KydlouIajp
q3JplLUHVFf1VzvpogKliiNBMn+nnXSAxGz/aHHxtLCXRoqwf1iiu8O8YIMyz3Dq
2HNS8HxSI8fkwGyBF2A0FguLEAbUnRzpf7Wcx0P4HJAMNG7aJSHpX57O999AAUWh
M09ksJRE1Bbd9gSaMIf5V4k98iJNPVXPOY+Lx781Y+nOWTDnrUa6aKEtaQbEgoE3
4H1vuPWweQh9zXCgMd73ue4raYGb5r/TvPN4ydNyyb2JKAfMD9HjS033L6LeRnNj
SVtOD+M0TxC5NcjoPd4CSLErimNM6js2bKEduPH+sFfu/jXUg1yBihxX7YrhPw==
-----END CERTIFICATE-----
27
certs/bitwarden/tls.key
Normal file
@@ -0,0 +1,27 @@
-----BEGIN RSA PRIVATE KEY-----
MIIEowIBAAKCAQEAuMEmtfuVtzEw/QLE5KWk9D6d7gyTWkVMtAWYT8QpNJePf0ce
SkRdpsUBCaPFK4vsKw2JHCqns34wmtTlouyCiWpz7E4avL7tP+u5+FXYC8RDd6SF
JFViVRQ0wdln83JOcQ+r/DQDkl+WvkUmS+jUQ/2b90tHHF3RlovozaV6yQO/cWTv
J6k5sP7OsHUPZF84/T2a/YBBXctYzC8AKybUOsVH4mM7rRbzIWp8fbZ9IuCualHw
/YKFKDiIiyKmhDif1XckG5Kun+FohbcbA3TC/T9fw9BPKz4hmuGvf4rjZSMH1BqJ
vomzywmT0vRblaGderMdbKjBeKSPWWq7CZR8DwIDAQABAoIBAAwkTXn6Pb2bUv/d
tbjdFfkjQFfLpcdx9HeEQp1DY/3b1AdmUhxJX+o82jOa+rNA79Vof1FFkF3gditG
wIyzhGSphVLLU0CCP69Ku58RbTBgxppPSpy3q82xlUIEGqvKIFOX1xKtDGsLMynv
+3NTqteJDD31SYgYtlRxf2w8atRY+Rs2u/UOMTDcaKneBbnqigYWubvkyc2OXU5+
mETLmCoIcwz6OJd1TLMIiZOkJBqGD0QAewYMypIuWVzhzUrnmEklwp8RKvC1iMSS
ciwGrZ18bBen2qDtRmn3WL+os96AhPJjgqjk16BXDRg5+wYeqWRwJPhLoQFqd72A
QhIsoFECgYEA0od+wIVPil152uChKR76BS0fQqP2q2ICyDzgwONS8SM+W1NdZyuv
FSHvksySTh/KuS1mC3qgMke4px8GB7hB6ry85u6sIzZk8B6nHY+mmSDPaCT2zhLM
jQXK+MBGVpsjrLI1xv6XvOkphhTWmFKZ2tNcm0xZbR77VGpt3K6TtykCgYEA4KiG
1U+pu77jbYZSSgML7g4CLpZe/JzZz1W3SdZeUiT7p9Klr1D4WhXBXsZCNw5ei2AB
SvDL0bg0ikW0eW12uc65a6qQtyp0HuHoidExB8sUR4Aux/Vwul7lOAZUNdGkVFym
lejPCxSYWHCavwkoATxvr5H6uyzveg8GZ3QamHcCgYEAtIuEdQAebW+qj71yGDy7
d3LxywmoAePktOoYaPLKb4ek81bz1AWPeZUHyqHrmloDVXkMeS2pclU1kwS0/CvV
Q8SmT3lBYFVGjPIMqPpHiiysEgkZKzLN/uaH4XmrGJylJHYUTlqJsHVYqeb2/dxg
m1wFoB0C1+To7sTzAH0qqrkCgYBuHcyxI7IDh2Y8Wflds66WSaGCKkx2r38HZHFJ
rNxgkSYUtWhmzV5d8YntpWnxSIbI9A7OJ8cPjaWbHN2AI0pteslh36G9Vf7C4GI1
oybQNhdDkK3dbw2JHFhoJJoEIzTT8PHqSsmpGbguqUsAVkGYkYIA4aGvOzBKeLDf
5oXeswKBgAwUxwTG5BWANsIhorVIP1Rn8y/v82693A/U5skXVcvpGmOfJ7CpicQI
glCy5HumMi/QWJukE1Cl48fZH7SdTiFLT0pReo8LriM0RQXtqBrOjpFuWQxQ7yAA
ldznojQylwoTc8AHK3FbEKUr3sh9WcKEIfmKPuLE9aS0iNZPY8YA
-----END RSA PRIVATE KEY-----
17
certs/jellyfin/ca.crt
Normal file
@@ -0,0 +1,17 @@
-----BEGIN CERTIFICATE-----
MIICxjCCAa6gAwIBAgIRAN7FuPHcNstZK6sXmGsl32cwDQYJKoZIhvcNAQELBQAw
ADAeFw0yMzAxMjExMDE4MDNaFw0yMzA0MjExMDE4MDNaMAAwggEiMA0GCSqGSIb3
DQEBAQUAA4IBDwAwggEKAoIBAQDrfvbZfmMtaduVRcWHN4LBo7QOpQTw3+aAgOCh
/kEYlIfL6Pphd7MGz3JdeB4v2EjaPdY8GgdOPR8FfwzlMDKUAjF9vHGL//7cFLUA
Izp8WyZECalSVS+n/2sYzlCwYCwO8ocCn0H+GxSEUk8UjKSMLa6MRoA7KKffnbvr
iaIF2kQLKnBs0fSCm8aD7r9MN0nDqFjwRFrymRO12rpK8ph3kkIHfKqqUEWR1SXD
Z50DS6EGXbcyQr9fdi9tfhuzTMRUy96KysRgF6NpTWp7j1DPrm2KlB9FlIzmHxhO
s2LykOjkVHUdVEIS+Bo6VBg7WJJkmvRDhfyk3SnDVsKQdRaXAgMBAAGjOzA5MA4G
A1UdDwEB/wQEAwIFoDAMBgNVHRMBAf8EAjAAMBkGA1UdEQEB/wQPMA2CC2tpbm8u
a2ltY2hpMA0GCSqGSIb3DQEBCwUAA4IBAQDENHfPaTRTPAX+mbKpJvlZVqSXvc5b
1lUH/+KUawBpxRJ2YfUUHIoKR0y7LClZIem+d1N3kyXBvbyz/Lql6oJ1MWHTUo8B
9M03SAuqpMMljec/gfIxWsOBckprNUaL5lkdQE2O6fMh2KacaAiqD+wOTKmoH5oK
RwlkXca4VjBKjpl8XA0EWEG1fuC+DXpq1OeFKigKe+/S2S/QMhqCMcULmaRtQ7sH
9ub6p0AOl1zX1ZHuSVmI8kRM5TCV5b/eGvxEQLn5mPLjfuIUnFpF/Yok3SYNMFZn
J1cm1vnNTtWUguPeaW7T4JiZZcZxHa0zofNiaEvcA7Eduv/ugId1mSPy
-----END CERTIFICATE-----
24
certs/jellyfin/tls.crt
Normal file
@@ -0,0 +1,24 @@
-----BEGIN CERTIFICATE-----
MIIEHDCCAwSgAwIBAgIUDaiD9A5/eJNxpQ29lklkGVQJT0QwDQYJKoZIhvcNAQEL
BQAwgYgxCzAJBgNVBAYTAkRFMQ0wCwYDVQQIDARob21lMQ0wCwYDVQQHDARob21l
MRcwFQYDVQQKDA5TY2hpY2sgSG9zdGluZzEYMBYGA1UEAwwPamVsbHlmaW4ua2lt
Y2hpMSgwJgYJKoZIhvcNAQkBFhl0aG9tYXMuc2NoaWNrQG1haWxib3gub3JnMB4X
DTI1MTAxODE1NTkxNloXDTI3MTAxODE1NTkxNlowgYgxCzAJBgNVBAYTAkRFMQ0w
CwYDVQQIDARob21lMQ0wCwYDVQQHDARob21lMRcwFQYDVQQKDA5TY2hpY2sgSG9z
dGluZzEYMBYGA1UEAwwPamVsbHlmaW4ua2ltY2hpMSgwJgYJKoZIhvcNAQkBFhl0
aG9tYXMuc2NoaWNrQG1haWxib3gub3JnMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A
MIIBCgKCAQEAzPqqS2SfMrIF0rvTLIDXg4dYIbbh0bpljVMTgzcNIqCztIF/8ukG
eo2Lut2Ou2y4Ritn4oeseem0adTDG169mqhrdjqCmTb/eJbn7hPwrdTJaSjzDvAO
dijs5aVtrKfgKGtwicsThVRlnrinfc+2BPjdrTYnXhvavnvOPD1HrNA7stJHcDOM
UKChIeVt8j4TaSWHBjBCJ1d66Oc3NSO4AjYO+m+5WZ9USzaBQBQyhhdAibE7Lucz
sa+NH1JUuJOWGvz6o7bbc0vzrk9j6L+J1UgZZAfkmOBkemAnbDYm54JI6BwNdd0g
J1619isGbTHEL68QXspKZLJ3GGXPD0S6tQIDAQABo3wwejAdBgNVHQ4EFgQU9/qN
PbuWsqOpd/F5JTHOSLc+YxAwHwYDVR0jBBgwFoAU9/qNPbuWsqOpd/F5JTHOSLc+
YxAwDwYDVR0TAQH/BAUwAwEB/zAnBgNVHREEIDAegg9qZWxseWZpbi5raW1jaGmC
C2tpbm8ua2ltY2hpMA0GCSqGSIb3DQEBCwUAA4IBAQADZCRLWHJWD3XMEHIDPfqD
sWEYz8wgnrSJOw7jcVk7b7X9H4vZmjKAYjL8SKU5Odf4SIoVa5Cbxm6y1NulHgkL
Ott8JW6jBh08sEKNART3mXx6CzwqXj+L8qOVeKJlc/v3qe7uc2SppJO5Gr3Jx4Eh
A1MR0tJrocot7ThY5X3gQNXtF4sysYHJvXDCcMk2xoH626hX+hOBvcT7Ov+e6GRM
Rk+4XbHA2Zg0bbK8u3SIGP80dW9lRqCRfEpsfwHLS+1HemRZepRPsE+DExBF9iMx
CFPPIz0jaAd+923XhQXWcOwPuHMTx6ZAqHDsO+Fd0GdcluaPyp8RP0oToblBaW43
-----END CERTIFICATE-----
28
certs/jellyfin/tls.key
Normal file
@@ -0,0 +1,28 @@
-----BEGIN PRIVATE KEY-----
MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQDM+qpLZJ8ysgXS
u9MsgNeDh1ghtuHRumWNUxODNw0ioLO0gX/y6QZ6jYu63Y67bLhGK2fih6x56bRp
1MMbXr2aqGt2OoKZNv94lufuE/Ct1MlpKPMO8A52KOzlpW2sp+Aoa3CJyxOFVGWe
uKd9z7YE+N2tNideG9q+e848PUes0Duy0kdwM4xQoKEh5W3yPhNpJYcGMEInV3ro
5zc1I7gCNg76b7lZn1RLNoFAFDKGF0CJsTsu5zOxr40fUlS4k5Ya/PqjtttzS/Ou
T2Pov4nVSBlkB+SY4GR6YCdsNibngkjoHA113SAnXrX2KwZtMcQvrxBeykpksncY
Zc8PRLq1AgMBAAECggEAYbCtbJUeEkkp4U9Gy5T0IWllpVvFr/DH6VzIleasye4Q
91wooJpSTiIbkAl7tvOPt1GEhz/mAYxSQYX3Hpo+fvD5ljU4fNDrXIt/KcYzFIWv
IAE2Jc5e9g13KdN3u6ued2UNE37HZOneLJEQsjNGKoR5Ec4XYRChZdsXZTpHaKJS
9w9Renc4B4ULQrEDDdRge7mYlXZ2QTFxkxQWopmPYwvs+T4LxyxLFm6oiodZcbXy
h1BooFHb6ikw1T5V/ubPT7MCR6PTPoJowTUVZzZyImNXlGONlIO64S7NbPbdYZHB
/P5gG3PODr7bHzkuLi3oLlHplM9Z8xFgKkWx+V2PmQKBgQDdXEGJodhUCPIPAIOk
/t9NZKSQ9Fg/OdPe+gdRfhW6oPmZSXfMjvahZkJNi18ECi1l0lzkNbzYtav1yz3t
t+YNIPak590Sl/kYsvy/POHdpOtTG8pLupE7LgfUSO/EHnCaV1KUBY4aJp3LpiCY
Ea7q/Qa7Pqu9d01hCNd3o9o/ZwKBgQDtDiycvpCSscPCgastAMBF3Gx32yJBQSaL
D6Ed5wzLEQzbJo8i7PvrvLqj6yGiRW4fY9GkyDKIhbereJvXsLFaTaf2zNUW0PM6
a9/Tb7k0c7kYxVq+OeH+MfS0BMySOLEiY1glDVUG7sviBHgmdzarht1QRHQhA0SR
6H587t7PgwKBgBHkMAPgyexY4L+nqfw/AWtu9AInTa6mjOJb0RWcHEN+WU4zavRk
pbh73GYKGr7n162AKDPlyAK4BFMUf0fkcjqjbGv9tZeYIvEFHnqSgCr69m48M8iV
JsHiwY096+stDqra3fjKziZ88ooQPlgsLbgehVnDAfyJVP6/yTKJUs2HAoGADxro
HNTHwZEyOCKrFaMGnWz+PGTqOd485n+IdK9UUVw0xYIffMo9AzhzbB5/dieWbMmf
gjB/h9N9cJ+uzn+jzW1FVqSWr22BEiftizuDQaReFwX8UkK988SbIx1rK6YRI2/R
HgtLb7WnqC9AuLK/+Q4O7B5wh+n9ZI68AJn3+KECgYEApg6tvuAayPY0gAmLQAj/
H08mJY/9trVQ7m4ix+FvIAtfEq1WuACVwvNcrvaCrJtgQaOp47l6ANjz0K9pCYwO
cA49raEPt3WHi7tcDr/0O1RpeCQq2htgrVAhCSa636C4bQJOVf5mhA/B92FEdQ0i
2dSNSsyRqysbTZtRtFfX9Dg=
-----END PRIVATE KEY-----
24
certs/openbooks/tls.crt
Normal file
@@ -0,0 +1,24 @@
-----BEGIN CERTIFICATE-----
MIIEGDCCAwCgAwIBAgIURjinqgedQmHtFvJdEX7CGxwo1vEwDQYJKoZIhvcNAQEL
BQAwgYUxCzAJBgNVBAYTAkRFMQ0wCwYDVQQIDARob21lMQ0wCwYDVQQHDARob21l
MRcwFQYDVQQKDA5TY2hpY2sgSG9zdGluZzEVMBMGA1UEAwwMYm9va3Mua2ltY2hp
MSgwJgYJKoZIhvcNAQkBFhl0aG9tYXMuc2NoaWNrQG1haWxib3gub3JnMB4XDTI1
MTAxODE2MDkyNloXDTI3MTAxODE2MDkyNlowgYUxCzAJBgNVBAYTAkRFMQ0wCwYD
VQQIDARob21lMQ0wCwYDVQQHDARob21lMRcwFQYDVQQKDA5TY2hpY2sgSG9zdGlu
ZzEVMBMGA1UEAwwMYm9va3Mua2ltY2hpMSgwJgYJKoZIhvcNAQkBFhl0aG9tYXMu
c2NoaWNrQG1haWxib3gub3JnMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKC
AQEAuYz7zuT2esbWyO999fDgRrV/mB9XjSoe7HI+YEAmv+mrDNV2JXYeXKFOFTVZ
0sy8PlIUZ1wb+zBd6a+ySRHYKLvPm2BUOyo1AlquMXfqQ8VtL8Z67ghwcyCkejLV
GctshDg8z1jypm7hNeNY505l+RO7BW3HbB2aCeKLK9D9YMqh6ISXSgS70rlvINdG
w/OMNaffxItiG+AwaefUzPncEI4tLUMf06hcbcXACMU3NCfpk7aZqUzFoALMTCMR
DUx7bjfO/rW4eSn5xGYJOIXb8nQHXvdThxbd2s6FFDFNU3L8vlrrtBO83nUJ1fBx
t5Zp2nfr12Y7nKTpC2gKkZjH4QIDAQABo34wfDAdBgNVHQ4EFgQUXD86UEB+wT73
ADb5ZKH2Djv45S4wHwYDVR0jBBgwFoAUXD86UEB+wT73ADb5ZKH2Djv45S4wDwYD
VR0TAQH/BAUwAwEB/zApBgNVHREEIjAgggxib29rcy5raW1jaGmCEG9wZW5ib29r
cy5raW1jaGkwDQYJKoZIhvcNAQELBQADggEBAHGWvGNf0c4+AfbKsztl5qVT1gwf
/L+3GWPKqA0DdRYIhSUkTzd7iHE9zGwKpf35b7O5ut8jtcHNfvjImg5QW57XJ0e5
FuMxeqebltAhwHuTK9JI5P6aj49WOHcdbNvQsKEzG8KrllW6lSpE24w8TfUls5a5
6KuHYzVcmcq1bsSXPqWKyT6jj2VmwUmlG5awBtiApAqrygjgipnH4xacH3f4BVvD
giXxTKGdPJC/3Rlc4sXUdKNrfFAB9ZQ55oA7DchvMyRpTPPBqYUlbQ3dkD3lEtoh
upmVQ9oTAmi/qPOO8O4nhBYaXm/vm8RmYqiQ+rRucNBup/9xrQXbYbREInY=
-----END CERTIFICATE-----
28
certs/openbooks/tls.key
Normal file
@@ -0,0 +1,28 @@
-----BEGIN PRIVATE KEY-----
MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQC5jPvO5PZ6xtbI
73318OBGtX+YH1eNKh7scj5gQCa/6asM1XYldh5coU4VNVnSzLw+UhRnXBv7MF3p
r7JJEdgou8+bYFQ7KjUCWq4xd+pDxW0vxnruCHBzIKR6MtUZy2yEODzPWPKmbuE1
41jnTmX5E7sFbcdsHZoJ4osr0P1gyqHohJdKBLvSuW8g10bD84w1p9/Ei2Ib4DBp
59TM+dwQji0tQx/TqFxtxcAIxTc0J+mTtpmpTMWgAsxMIxENTHtuN87+tbh5KfnE
Zgk4hdvydAde91OHFt3azoUUMU1Tcvy+Wuu0E7zedQnV8HG3lmnad+vXZjucpOkL
aAqRmMfhAgMBAAECggEAR6eCrJSt/J7h3gnidkIVkijQA9KCsHCGLaA2p1vOuwkF
NbkPXYseUX43VahbLjVDMrvmxj2DTr8QXUiszFv4Qk647xNFo+16OBUFtPPOg2pv
7mWzHk4jAsqlyczsj4AHwY2oKhh66Dvke7d2oyia55OvgvqwaveJf2y9eufDmYcs
pZafa2wR1Xg+ATAiAD6cuEOAfRwfcftkQPSnRKgMzje/3jEMgSWmshY5wLgNfqiN
NXjgUfXzHLtj5gCm6spmUQM1KIQ3yAdXqOddSkNKY1hr8D0fpXuFOH1h0Cv75oHm
pWWVdTNhX9KkhyI0MAh3Q0m9d3U5oCSZBJUdIKWm+wKBgQD+S1T0rHSJwGgSNQAf
LW35bAlUxdjXO62tP2yoK9RBysxlrshCD7s7zVG9C5VSXxo4bOKJlI8Xj8BCEoZN
PH8jITBgifhSyrXOp2U/3EvHBrdYSjdExJzVtOLHihu8Jkp3wab/8bxUCZ8tSsR5
iYi8Krnua2VRFkSAWPagfhQ8YwKBgQC6y5thzAW7O6SCj39iIXzuKg9rtSDsfw2D
gutJ6Ofin6ew/QbX2yjF781vwMaxoIeqzyjE+O/HmPJBI4+OJ7qQpLEls0iCG5Ch
cCt5yGm3tpGLe+89tpg/tfCSh3OIBGazdAd9LA0/0oi1oLRicpPAjcUxxV/SKKWn
5mDJtm4T6wKBgAI+I9eslbKJUeGnOgMMYYXroAFxZUIwso1um8S37j1OTpMvAXEj
tmEGpIvoSD7bu913iF/yQXjRub5bb3fK6swihMy1Ks2AIC5cZ5YymTB+LKvIq8gd
e8yetclQvIHiTJHV0WU8eo67Lv41RJpVzjDqp40kwVX/vkbrgfFUa1VFAoGAGT6V
FEV3bNNlq0Nrar6t3J4QkXTcKzoMgH57//QbCpSbHB9GAnwa6Y08DWNXNwBD9YCj
uOMPvMDd1JHSv9p8qzmmuzqcjQDergKzzXSZXPuuddRdA9EeiFW1Wog1w+ccXhpL
PM5sR/jTAwDiAAAOGdLPGKfdCFD3+lX3NKuT+tsCgYEAmAFPvwy3ERcLvRXl7Uva
MhXPh97tIJk2H/tyBKsyoUbuwtcHM6AeWywyDnLHYCqXiFY4/AdzMdSWhaN2K+H4
GI9mGU5T22T+K/Sz+S6xNSdqnVsRnFcMz9Vj0D4XGE8y8cAaA6TATdi2uBrZ0oTS
fL+TfdB/dWzbKzM2N6cSx+E=
-----END PRIVATE KEY-----
331
get_helm.sh
Executable file
@@ -0,0 +1,331 @@
#!/usr/bin/env bash

# Copyright The Helm Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# The install script is based off of the MIT-licensed script from glide,
# the package manager for Go: https://github.com/Masterminds/glide.sh/blob/master/get

: ${BINARY_NAME:="helm"}
: ${USE_SUDO:="true"}
: ${DEBUG:="false"}
: ${VERIFY_CHECKSUM:="true"}
: ${VERIFY_SIGNATURES:="false"}
: ${HELM_INSTALL_DIR:="/usr/local/bin"}
: ${GPG_PUBRING:="pubring.kbx"}

HAS_CURL="$(type "curl" &> /dev/null && echo true || echo false)"
HAS_WGET="$(type "wget" &> /dev/null && echo true || echo false)"
HAS_OPENSSL="$(type "openssl" &> /dev/null && echo true || echo false)"
HAS_GPG="$(type "gpg" &> /dev/null && echo true || echo false)"
HAS_GIT="$(type "git" &> /dev/null && echo true || echo false)"

# initArch discovers the architecture for this system.
initArch() {
  ARCH=$(uname -m)
  case $ARCH in
    armv5*) ARCH="armv5";;
    armv6*) ARCH="armv6";;
    armv7*) ARCH="arm";;
    aarch64) ARCH="arm64";;
    x86) ARCH="386";;
    x86_64) ARCH="amd64";;
    i686) ARCH="386";;
    i386) ARCH="386";;
  esac
}

# initOS discovers the operating system for this system.
initOS() {
  OS=$(echo `uname`|tr '[:upper:]' '[:lower:]')

  case "$OS" in
    # Minimalist GNU for Windows
    mingw*|cygwin*) OS='windows';;
  esac
}

# runs the given command as root (detects if we are root already)
runAsRoot() {
  if [ $EUID -ne 0 -a "$USE_SUDO" = "true" ]; then
    sudo "${@}"
  else
    "${@}"
  fi
}

# verifySupported checks that the os/arch combination is supported for
# binary builds, as well whether or not necessary tools are present.
verifySupported() {
  local supported="darwin-amd64\ndarwin-arm64\nlinux-386\nlinux-amd64\nlinux-arm\nlinux-arm64\nlinux-ppc64le\nlinux-s390x\nwindows-amd64"
  if ! echo "${supported}" | grep -q "${OS}-${ARCH}"; then
    echo "No prebuilt binary for ${OS}-${ARCH}."
    echo "To build from source, go to https://github.com/helm/helm"
    exit 1
  fi

  if [ "${HAS_CURL}" != "true" ] && [ "${HAS_WGET}" != "true" ]; then
    echo "Either curl or wget is required"
    exit 1
  fi

  if [ "${VERIFY_CHECKSUM}" == "true" ] && [ "${HAS_OPENSSL}" != "true" ]; then
    echo "In order to verify checksum, openssl must first be installed."
    echo "Please install openssl or set VERIFY_CHECKSUM=false in your environment."
    exit 1
  fi

  if [ "${VERIFY_SIGNATURES}" == "true" ]; then
    if [ "${HAS_GPG}" != "true" ]; then
      echo "In order to verify signatures, gpg must first be installed."
      echo "Please install gpg or set VERIFY_SIGNATURES=false in your environment."
      exit 1
    fi
    if [ "${OS}" != "linux" ]; then
      echo "Signature verification is currently only supported on Linux."
      echo "Please set VERIFY_SIGNATURES=false or verify the signatures manually."
      exit 1
    fi
  fi

  if [ "${HAS_GIT}" != "true" ]; then
    echo "[WARNING] Could not find git. It is required for plugin installation."
  fi
}

# checkDesiredVersion checks if the desired version is available.
checkDesiredVersion() {
  if [ "x$DESIRED_VERSION" == "x" ]; then
    # Get tag from release URL
    local latest_release_url="https://github.com/helm/helm/releases"
    if [ "${HAS_CURL}" == "true" ]; then
      TAG=$(curl -Ls $latest_release_url | grep 'href="/helm/helm/releases/tag/v3.[0-9]*.[0-9]*\"' | sed -E 's/.*\/helm\/helm\/releases\/tag\/(v[0-9\.]+)".*/\1/g' | head -1)
    elif [ "${HAS_WGET}" == "true" ]; then
      TAG=$(wget $latest_release_url -O - 2>&1 | grep 'href="/helm/helm/releases/tag/v3.[0-9]*.[0-9]*\"' | sed -E 's/.*\/helm\/helm\/releases\/tag\/(v[0-9\.]+)".*/\1/g' | head -1)
    fi
  else
    TAG=$DESIRED_VERSION
  fi
}

# checkHelmInstalledVersion checks which version of helm is installed and
# if it needs to be changed.
checkHelmInstalledVersion() {
  if [[ -f "${HELM_INSTALL_DIR}/${BINARY_NAME}" ]]; then
    local version=$("${HELM_INSTALL_DIR}/${BINARY_NAME}" version --template="{{ .Version }}")
    if [[ "$version" == "$TAG" ]]; then
      echo "Helm ${version} is already ${DESIRED_VERSION:-latest}"
      return 0
    else
      echo "Helm ${TAG} is available. Changing from version ${version}."
      return 1
    fi
  else
    return 1
  fi
}

# downloadFile downloads the latest binary package and also the checksum
# for that binary.
downloadFile() {
  HELM_DIST="helm-$TAG-$OS-$ARCH.tar.gz"
  DOWNLOAD_URL="https://get.helm.sh/$HELM_DIST"
  CHECKSUM_URL="$DOWNLOAD_URL.sha256"
  HELM_TMP_ROOT="$(mktemp -dt helm-installer-XXXXXX)"
  HELM_TMP_FILE="$HELM_TMP_ROOT/$HELM_DIST"
  HELM_SUM_FILE="$HELM_TMP_ROOT/$HELM_DIST.sha256"
  echo "Downloading $DOWNLOAD_URL"
  if [ "${HAS_CURL}" == "true" ]; then
    curl -SsL "$CHECKSUM_URL" -o "$HELM_SUM_FILE"
    curl -SsL "$DOWNLOAD_URL" -o "$HELM_TMP_FILE"
  elif [ "${HAS_WGET}" == "true" ]; then
    wget -q -O "$HELM_SUM_FILE" "$CHECKSUM_URL"
    wget -q -O "$HELM_TMP_FILE" "$DOWNLOAD_URL"
  fi
}

# verifyFile verifies the SHA256 checksum of the binary package
# and the GPG signatures for both the package and checksum file
# (depending on settings in environment).
verifyFile() {
  if [ "${VERIFY_CHECKSUM}" == "true" ]; then
    verifyChecksum
  fi
  if [ "${VERIFY_SIGNATURES}" == "true" ]; then
    verifySignatures
  fi
}

# installFile installs the Helm binary.
installFile() {
  HELM_TMP="$HELM_TMP_ROOT/$BINARY_NAME"
  mkdir -p "$HELM_TMP"
  tar xf "$HELM_TMP_FILE" -C "$HELM_TMP"
  HELM_TMP_BIN="$HELM_TMP/$OS-$ARCH/helm"
  echo "Preparing to install $BINARY_NAME into ${HELM_INSTALL_DIR}"
  runAsRoot cp "$HELM_TMP_BIN" "$HELM_INSTALL_DIR/$BINARY_NAME"
  echo "$BINARY_NAME installed into $HELM_INSTALL_DIR/$BINARY_NAME"
}

# verifyChecksum verifies the SHA256 checksum of the binary package.
verifyChecksum() {
  printf "Verifying checksum... "
  local sum=$(openssl sha1 -sha256 ${HELM_TMP_FILE} | awk '{print $2}')
  local expected_sum=$(cat ${HELM_SUM_FILE})
  if [ "$sum" != "$expected_sum" ]; then
    echo "SHA sum of ${HELM_TMP_FILE} does not match. Aborting."
    exit 1
  fi
  echo "Done."
}

# verifySignatures obtains the latest KEYS file from GitHub main branch
# as well as the signature .asc files from the specific GitHub release,
# then verifies that the release artifacts were signed by a maintainer's key.
verifySignatures() {
  printf "Verifying signatures... "
  local keys_filename="KEYS"
  local github_keys_url="https://raw.githubusercontent.com/helm/helm/main/${keys_filename}"
  if [ "${HAS_CURL}" == "true" ]; then
    curl -SsL "${github_keys_url}" -o "${HELM_TMP_ROOT}/${keys_filename}"
  elif [ "${HAS_WGET}" == "true" ]; then
    wget -q -O "${HELM_TMP_ROOT}/${keys_filename}" "${github_keys_url}"
  fi
  local gpg_keyring="${HELM_TMP_ROOT}/keyring.gpg"
  local gpg_homedir="${HELM_TMP_ROOT}/gnupg"
  mkdir -p -m 0700 "${gpg_homedir}"
  local gpg_stderr_device="/dev/null"
  if [ "${DEBUG}" == "true" ]; then
    gpg_stderr_device="/dev/stderr"
  fi
  gpg --batch --quiet --homedir="${gpg_homedir}" --import "${HELM_TMP_ROOT}/${keys_filename}" 2> "${gpg_stderr_device}"
  gpg --batch --no-default-keyring --keyring "${gpg_homedir}/${GPG_PUBRING}" --export > "${gpg_keyring}"
  local github_release_url="https://github.com/helm/helm/releases/download/${TAG}"
  if [ "${HAS_CURL}" == "true" ]; then
    curl -SsL "${github_release_url}/helm-${TAG}-${OS}-${ARCH}.tar.gz.sha256.asc" -o "${HELM_TMP_ROOT}/helm-${TAG}-${OS}-${ARCH}.tar.gz.sha256.asc"
    curl -SsL "${github_release_url}/helm-${TAG}-${OS}-${ARCH}.tar.gz.asc" -o "${HELM_TMP_ROOT}/helm-${TAG}-${OS}-${ARCH}.tar.gz.asc"
  elif [ "${HAS_WGET}" == "true" ]; then
    wget -q -O "${HELM_TMP_ROOT}/helm-${TAG}-${OS}-${ARCH}.tar.gz.sha256.asc" "${github_release_url}/helm-${TAG}-${OS}-${ARCH}.tar.gz.sha256.asc"
    wget -q -O "${HELM_TMP_ROOT}/helm-${TAG}-${OS}-${ARCH}.tar.gz.asc" "${github_release_url}/helm-${TAG}-${OS}-${ARCH}.tar.gz.asc"
  fi
  local error_text="If you think this might be a potential security issue,"
  error_text="${error_text}\nplease see here: https://github.com/helm/community/blob/master/SECURITY.md"
  local num_goodlines_sha=$(gpg --verify --keyring="${gpg_keyring}" --status-fd=1 "${HELM_TMP_ROOT}/helm-${TAG}-${OS}-${ARCH}.tar.gz.sha256.asc" 2> "${gpg_stderr_device}" | grep -c -E '^\[GNUPG:\] (GOODSIG|VALIDSIG)')
  if [[ ${num_goodlines_sha} -lt 2 ]]; then
    echo "Unable to verify the signature of helm-${TAG}-${OS}-${ARCH}.tar.gz.sha256!"
    echo -e "${error_text}"
    exit 1
  fi
  local num_goodlines_tar=$(gpg --verify --keyring="${gpg_keyring}" --status-fd=1 "${HELM_TMP_ROOT}/helm-${TAG}-${OS}-${ARCH}.tar.gz.asc" 2> "${gpg_stderr_device}" | grep -c -E '^\[GNUPG:\] (GOODSIG|VALIDSIG)')
  if [[ ${num_goodlines_tar} -lt 2 ]]; then
    echo "Unable to verify the signature of helm-${TAG}-${OS}-${ARCH}.tar.gz!"
    echo -e "${error_text}"
    exit 1
  fi
  echo "Done."
}

# fail_trap is executed if an error occurs.
fail_trap() {
  result=$?
  if [ "$result" != "0" ]; then
    if [[ -n "$INPUT_ARGUMENTS" ]]; then
      echo "Failed to install $BINARY_NAME with the arguments provided: $INPUT_ARGUMENTS"
      help
    else
      echo "Failed to install $BINARY_NAME"
    fi
    echo -e "\tFor support, go to https://github.com/helm/helm."
  fi
  cleanup
  exit $result
}

# testVersion tests the installed client to make sure it is working.
testVersion() {
  set +e
  HELM="$(command -v $BINARY_NAME)"
  if [ "$?" = "1" ]; then
    echo "$BINARY_NAME not found. Is $HELM_INSTALL_DIR on your "'$PATH?'
    exit 1
  fi
  set -e
}

# help provides possible cli installation arguments
help () {
  echo "Accepted cli arguments are:"
  echo -e "\t[--help|-h ] ->> prints this help"
  echo -e "\t[--version|-v <desired_version>] . When not defined it fetches the latest release from GitHub"
  echo -e "\te.g. --version v3.0.0 or -v canary"
  echo -e "\t[--no-sudo] ->> install without sudo"
}

# cleanup temporary files to avoid https://github.com/helm/helm/issues/2977
cleanup() {
  if [[ -d "${HELM_TMP_ROOT:-}" ]]; then
    rm -rf "$HELM_TMP_ROOT"
  fi
}

# Execution

# Stop execution on any error
trap "fail_trap" EXIT
set -e

# Set debug if desired
if [ "${DEBUG}" == "true" ]; then
  set -x
fi

# Parsing input arguments (if any)
export INPUT_ARGUMENTS="${@}"
set -u
while [[ $# -gt 0 ]]; do
  case $1 in
    '--version'|-v)
      shift
      if [[ $# -ne 0 ]]; then
        export DESIRED_VERSION="${1}"
      else
        echo -e "Please provide the desired version. e.g. --version v3.0.0 or -v canary"
        exit 0
      fi
      ;;
    '--no-sudo')
      USE_SUDO="false"
      ;;
    '--help'|-h)
      help
      exit 0
      ;;
    *) exit 1
      ;;
  esac
  shift
done
set +u

initArch
initOS
verifySupported
checkDesiredVersion
if ! checkHelmInstalledVersion; then
  downloadFile
  verifyFile
  installFile
fi
testVersion
cleanup
4
k8s/core/bitwarden/_namespace.yml
Normal file
@@ -0,0 +1,4 @@
apiVersion: v1
kind: Namespace
metadata:
  name: bitwarden
12
k8s/core/bitwarden/admin-secret.yml
Normal file
@@ -0,0 +1,12 @@
apiVersion: v1
kind: Secret
metadata:
  name: bitwarden-admin-token
  namespace: bitwarden
  labels:
    app: bitwarden
type: Opaque
data:
  # openssl rand -base64 48
  token: "kU7bn7fyOjRkNIglQwcvgGOhjT2YCCiMGNBYdlfT5uQjsonJOWmh1pB0xe83jnfk"
28
k8s/core/bitwarden/certificates.yml
Normal file
@@ -0,0 +1,28 @@
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: bitwarden-kimchi-cert
  namespace: bitwarden
spec:
  secretName: bitwarden-kimchi-tls
  issuerRef:
    name: cluster-issuer-selfsigned
    kind: ClusterIssuer
  commonName: passwords.kimchi
  dnsNames:
    - passwords.kimchi
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: bitwarden-public-cert
  namespace: bitwarden
spec:
  secretName: passwords-schick-web-site-tls
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
  commonName: passwords.schick-web.site
  dnsNames:
    - passwords.schick-web.site
17
k8s/core/bitwarden/claims.yml
Normal file
@@ -0,0 +1,17 @@
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  labels:
    app: bitwarden
  name: bitwarden-data-claim
  namespace: bitwarden
  # annotations:
  #   volume.beta.kubernetes.io/storage-class: "managed-nfs-storage-bitwarden"
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 2Gi
40
k8s/core/bitwarden/configmaps.yml
Normal file
@@ -0,0 +1,40 @@
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: bitwarden
  namespace: bitwarden
  labels:
    app: bitwarden
data:
  ADMIN_TOKEN: "kU7bn7fyOjRkNIglQwcvgGOhjT2YCCiMGNBYdlfT5uQjsonJOWmh1pB0xe83jnfk"
  # SMTP settings, see:
  # https://github.com/dani-garcia/bitwarden_rs/blob/master/README.md#smtp-configuration
  SMTP_HOST: "thomas.schick@mailbox.org"
  SMTP_FROM: "thomas.schick@mailbox.org"
  SMTP_PORT: "587"
  SMTP_SSL: "true"
  # nginx-ingress-controller has built in support for Websockets
  # Project: https://github.com/kubernetes/ingress-nginx
  WEBSOCKET_ENABLED: "true"
  # Where to store persistent data
  # make sure that this reflects the setting in StatefulSet, otherwise data might be lost
  DATA_FOLDER: "/data"
  # What domain is bitwarden going to be hosted on
  # This needs to reflect setting in ingress otherwise some 2FA methods might not work
  DOMAIN: "https://passwords.schick-web.site"
  # Number of workers to spin up for the service
  ROCKET_WORKERS: "1"
  # Show password hint instead of sending it via email
  SHOW_PASSWORD_HINT: "false"
  # Enable Vault interface, when disabled, only API is served
  WEB_VAULT_ENABLED: "true"
  # Port to serve http requests on
  # most likely no need to change this here, look at ingress configuration instead
  ROCKET_PORT: "8080"
  # Allow registration of new users
  SIGNUPS_ALLOWED: "false"
  # Allow current users invite new users even if registrations are otherwise disabled.
  # https://github.com/dani-garcia/bitwarden_rs/blob/master/README.md#disable-invitations
  INVITATIONS_ALLOWED: "false"
  LOG_FILE: "/data/bitwarden.log"
53  k8s/core/bitwarden/deployment.yml  Normal file
@@ -0,0 +1,53 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bitwarden
  namespace: bitwarden
  labels:
    name: bitwarden
spec:
  selector:
    matchLabels:
      pod: bitwarden
  replicas: 0
  template:
    metadata:
      labels:
        pod: bitwarden
    spec:
      serviceAccountName: bitwarden
      containers:
        - name: bitwarden
          image: vaultwarden/server:latest
          imagePullPolicy: Always
          env:
            - name: ADMIN_TOKEN
              valueFrom:
                secretKeyRef:
                  name: bitwarden-admin-token
                  key: token
          envFrom:
            - configMapRef:
                name: bitwarden
          ports:
            - containerPort: 8080
              name: bitwarden-http
              protocol: TCP
            - containerPort: 3012
              name: websocket
              protocol: TCP
          resources:
            limits:
              cpu: 300m
              memory: 1Gi
            requests:
              cpu: 50m
              memory: 256Mi
          volumeMounts:
            - mountPath: /bitwarden/data
              name: bitwarden-data
              readOnly: false
      volumes:
        - name: bitwarden-data
          persistentVolumeClaim:
            claimName: bitwarden-data-claim
44  k8s/core/bitwarden/rbac.yml  Normal file
@@ -0,0 +1,44 @@
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: bitwarden
  labels:
    app: bitwarden
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: bitwarden
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      - "bitwarden"
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - secrets
    resourceNames:
      - "bitwarden-admin-token"
      - "bitwarden-smtp"
    verbs:
      - get

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: bitwarden
  namespace: bitwarden
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: bitwarden
subjects:
  - kind: ServiceAccount
    name: bitwarden
17  k8s/core/bitwarden/service.yml  Normal file
@@ -0,0 +1,17 @@
apiVersion: v1
kind: Service
metadata:
  name: bitwarden
  namespace: bitwarden
spec:
  selector:
    app: bitwarden
  ports:
    - name: bitwarden-http
      port: 80
      protocol: TCP
      targetPort: 8080
    - name: websocket
      protocol: TCP
      port: 3012
      targetPort: 3012
12  k8s/core/bitwarden/smtp-secret.yml  Normal file
@@ -0,0 +1,12 @@
---
apiVersion: v1
kind: Secret
metadata:
  name: bitwarden-smtp
  namespace: bitwarden
  labels:
    app: bitwarden
type: Opaque
data:
  emailUser: dGhvbWFzLnNjaGlja0BtYWlsYm94Lm9yZwo=
  emailPassword: enk4OFBLWGpkam5rTG9WR0FhTXJnbmFocWZuSU9MRzYK
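The `data` values in an Opaque Secret are base64-encoded. A minimal sketch of producing and verifying such a value (the address below is a placeholder, not the real account); note that plain `echo` appends a newline that ends up inside the decoded value — the stored values above appear to include exactly such a trailing newline — so `-n` is usually what you want:

```shell
# Encode a value for the Secret's data map; -n avoids a trailing newline.
echo -n 'user@example.org' | base64
# prints dXNlckBleGFtcGxlLm9yZw==

# Decode to verify what is actually stored:
echo 'dXNlckBleGFtcGxlLm9yZw==' | base64 -d
# prints user@example.org
```

Alternatively, `kubectl create secret generic ... --from-literal=` handles the encoding and avoids the newline pitfall entirely.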
66  k8s/core/bitwarden/statefulset.yml  Normal file
@@ -0,0 +1,66 @@
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: bitwarden
  namespace: bitwarden
  labels:
    name: bitwarden
spec:
  serviceName: bitwarden
  replicas: 1
  selector:
    matchLabels:
      app: bitwarden
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: bitwarden
    spec:
      serviceAccountName: bitwarden
      containers:
        - name: bitwarden
          image: vaultwarden/server:1.34.3
          imagePullPolicy: IfNotPresent
          env:
            # - name: ADMIN_TOKEN
            #   valueFrom:
            #     secretKeyRef:
            #       name: bitwarden-admin-token
            #       key: token
            - name: SMTP_USERNAME
              valueFrom:
                secretKeyRef:
                  name: bitwarden-smtp
                  key: emailUser
            - name: SMTP_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: bitwarden-smtp
                  key: emailPassword
          envFrom:
            - configMapRef:
                name: bitwarden
          ports:
            - containerPort: 8080
              name: http
              protocol: TCP
            - containerPort: 3012
              name: websocket
              protocol: TCP
          resources:
            limits:
              cpu: 300m
              memory: 1Gi
            requests:
              cpu: 50m
              memory: 256Mi
          volumeMounts:
            - mountPath: /data
              name: bitwarden-data
              readOnly: false
      volumes:
        - name: bitwarden-data
          persistentVolumeClaim:
            claimName: bitwarden-data-claim
9  k8s/core/bitwarden/tls-cert.yml  Normal file
@@ -0,0 +1,9 @@
apiVersion: v1
kind: Secret
metadata:
  name: bitwarden-kimchi-tls
  namespace: bitwarden
type: kubernetes.io/tls
data:
  tls.crt: "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUZoVENDQTIwQ0ZFQlVhZEtqNFRCOHNPbDdMNW42a3ZsWTFYOFVNQTBHQ1NxR1NJYjNEUUVCQ3dVQU1IOHgKQ3pBSkJnTlZCQVlUQWtSRk1RMHdDd1lEVlFRSURBUm9iMjFsTVEwd0N3WURWUVFIREFSb2IyMWxNUmN3RlFZRApWUVFLREE1VFkyaHBZMnNnU0c5emRHbHVaekVQTUEwR0ExVUVBd3dHVkdodmJXRnpNU2d3SmdZSktvWklodmNOCkFRa0JGaGwwYUc5dFlYTXVjMk5vYVdOclFHMWhhV3hpYjNndWIzSm5NQjRYRFRJeU1USXhPVEUzTlRVeE0xb1gKRFRJek1USXhPVEUzTlRVeE0xb3dmekVMTUFrR0ExVUVCaE1DUkVVeERUQUxCZ05WQkFnTUJHaHZiV1V4RFRBTApCZ05WQkFjTUJHaHZiV1V4RnpBVkJnTlZCQW9NRGxOamFHbGpheUJJYjNOMGFXNW5NUTh3RFFZRFZRUUREQVpVCmFHOXRZWE14S0RBbUJna3Foa2lHOXcwQkNRRVdHWFJvYjIxaGN5NXpZMmhwWTJ0QWJXRnBiR0p2ZUM1dmNtY3cKZ2dJaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQ0R3QXdnZ0lLQW9JQ0FRQ3lrUjBVMkZVNGJQYUZRaTZCbktnMwovOHY4Y29oeDZnTUdpQlAwVWVYVzErc2Z2Q2ExaTEvc1Vua2ZOam9DUU52V3JVNkY2M1o1VnArWEpaSlFTeG0zCnhSMjRsN2haYUpmYVdhY0dnZzU5U29mNEJLZFhsSVBIUHU0YkpHQ3RWMmI0VHp3OTZRMnFMaXpQcnYvemRtUEkKeUEwMVcrQmVKZlFWS2JjS2xiSTJtN2plNlJGQUdjcmRweEhWaElHMTJYTlMveFloRFNrNUpnblIvdU9PaDI4VwpqaEtPUW12ZUk0bWhQMzFuR09qYS8rS25ySzBwcGZ3ZlFFUENucXgvUytPWVJDUWdlTXdjVG41TGl6Y0lBVk5QClptV3haaWhlMDRhelJNTEVldG1hTW1HdEV3YnNnS1FBV1pBVGl3eXZST2ZVa2VkMG1pNXp1OUkwTEdacHZXMUgKVmtkK1o5VlJiZmE3aHRwRzVyWUEvSi8ycHE0ZHpoajVCM3h6T2MrTVBsSWk1MGRzT2VPMlhTZlB5WWJPdTdSVApISDBnaUVlOCtOOUx5OVdaRVpFdUpNUTZiNTdwaHRMUjVRcW5mN2RKWGU1TDJ3bm5PWkhGaG9OZjFlWjJudmJBCmJ2YWlIZUFWYVpYRXhSVEorTlRSamVHZU84NHpFZmIvcE1mWWNtTDZaY2xaanFRWUJGcGFMMllCcWQ1ODVMQ28KWldEQTR5ZDM2YkRsUGhDWGdzdnFUcmcxaitpMmJydEdUNXQyYk9KUE5ISTdxdXgxTmswSlNqcjBKTnd6Kys2MApyRG1HRmduR1hYQW9SOGpOOGRKU0gyVXIzWXhHck1LWnZiQzFTZ2lYRkF3eE5uMDhkL25qYmJyRWk0N2krQlFkCitaVmFlN3djMHpIYWkra2VPOFRwcHdJREFRQUJNQTBHQ1NxR1NJYjNEUUVCQ3dVQUE0SUNBUUNDVWs5VWNKSjUKb21oQmVnelZSOTZIaTZzZDRvYzRQeUJHLzc5UzhvMk5LUXRsenhiR2JJaXAvYXVrZ3VMZ2lFbk4wMDlXT1c5dQpwVVNkaitCK1pmbzE3QWtldU9sOTBrTUVtaER6Z05IUVF2c2RtWVlSalRlRHN6Q0hrZlU2OXhEL0tiT3NpYnluClkyWTZLMWpRS3ROK0lxSUJxYzVsa1ZvUWRmOWExbXFydUQ5MUwwQVdpRTVtRHJwWnJ1WDUyT3NVYkphb3BqamUKUTVWUXYrWGZLcWxyZ1R3NmFWbFIwUlhCOWRXRVNieHlVTUZXQUhUL0JjbFppY2xuYy9oUjhCN0Z4bHhsMWttMAo4QXpkakVhNmpNOHIxdlZEOS9PUFkvY3owNzUxbjY2Sm1pUjYrTWJBN1lBbWJLRzFoUC8raDF4N0lCVUUvVjE1CkIxTnRWbDNpeEN2V2xXOGFJRWh5K0hzSm5JMllnYVNrVVlxbDg5eWRuWEljMEh6UEhVWVZ1UmRLYmNJNXFuRjcKTVl2N25FbmtndFRscEF4VEY4c2ZOaTJOM2JieTh3QWxqSlQ0aWIvWE12ais5eldDMldnMXoxVVJXdmx6bUEvdwpGSlhnR3NPL3dJVml3VlRES1h4c1cwVVpJc3kzUUtOUWxIdWdLRjdVbDFxcHZGbk52c1NPczZnTnJCcjF6anNJCmZLa3NuUlEzSlk3N01PNGFHNzVOb0Vnd0c0WWtDNVYwK01JWUFJK3gxNmlycDRGNlVGMDhYdzZ4RjlqMGhseGMKenB0RW5WVE5qZDB4ZjlDYVhBa2NaM1RuRHlJSFBUbGhRWmNCbkw5MkRpTFBDWTM2MTREL2p6aHh4M3o1bzlnZApOVzh0MW1ncDFLMlU2WlJwdERaQWwrbVRkY1ZiTnRtZEJRPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo="
  tls.key: "LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUpRUUlCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQ1Nzd2dna25BZ0VBQW9JQ0FRQ3lrUjBVMkZVNGJQYUYKUWk2Qm5LZzMvOHY4Y29oeDZnTUdpQlAwVWVYVzErc2Z2Q2ExaTEvc1Vua2ZOam9DUU52V3JVNkY2M1o1VnArWApKWkpRU3htM3hSMjRsN2haYUpmYVdhY0dnZzU5U29mNEJLZFhsSVBIUHU0YkpHQ3RWMmI0VHp3OTZRMnFMaXpQCnJ2L3pkbVBJeUEwMVcrQmVKZlFWS2JjS2xiSTJtN2plNlJGQUdjcmRweEhWaElHMTJYTlMveFloRFNrNUpnblIKL3VPT2gyOFdqaEtPUW12ZUk0bWhQMzFuR09qYS8rS25ySzBwcGZ3ZlFFUENucXgvUytPWVJDUWdlTXdjVG41TAppemNJQVZOUFptV3haaWhlMDRhelJNTEVldG1hTW1HdEV3YnNnS1FBV1pBVGl3eXZST2ZVa2VkMG1pNXp1OUkwCkxHWnB2VzFIVmtkK1o5VlJiZmE3aHRwRzVyWUEvSi8ycHE0ZHpoajVCM3h6T2MrTVBsSWk1MGRzT2VPMlhTZlAKeVliT3U3UlRISDBnaUVlOCtOOUx5OVdaRVpFdUpNUTZiNTdwaHRMUjVRcW5mN2RKWGU1TDJ3bm5PWkhGaG9OZgoxZVoybnZiQWJ2YWlIZUFWYVpYRXhSVEorTlRSamVHZU84NHpFZmIvcE1mWWNtTDZaY2xaanFRWUJGcGFMMllCCnFkNTg1TENvWldEQTR5ZDM2YkRsUGhDWGdzdnFUcmcxaitpMmJydEdUNXQyYk9KUE5ISTdxdXgxTmswSlNqcjAKSk53eisrNjByRG1HRmduR1hYQW9SOGpOOGRKU0gyVXIzWXhHck1LWnZiQzFTZ2lYRkF3eE5uMDhkL25qYmJyRQppNDdpK0JRZCtaVmFlN3djMHpIYWkra2VPOFRwcHdJREFRQUJBb0lDQUE0Vnp6N3hMNHV6WUtDelR6clBFaC9VCnNCRDZNZFFXZXVXZGYwRnk0bGZYa1ViZ0R3ZWI5bFdNVGR5TjZQWjdpanU5VU9mVVluU0F4amJrY2syZUZ1bTkKaFRJbDJaZEgzazZOYXRUakZtU0FxQWdDeWZacEV4bjQxMHhSeXNSeGxBQTdNOHZJWWRrT0tsKzVkSndPTnlIRAowZkxuQytReFJ6Y1NJc3VWY2tqSGNNWTRpVEZPdDRkVFlkOC9SQUlGcElpajhXbHJBZGp0ckxHaFV1N1B0UTRJCjUrTEx6M2xtd1RqMGFwNWl3eDlmTnRBMkdUU2pVS2RnYU5mbHIwS2RTRmNlN09DWTFyVFc1alJzUjB0ODIwYjAKazBueWZuVzRaclFtVk55dHVoTmxMUG83ZVh6WFN0aHlPQ0NxZzdZaGo4M2ZNbXdxcTBaYXh1SnhyQ1JrR2tYMwozZGEyUWJSV1VvSElGZmJPcjNTTk9UTmhpVmU4WEVNcGdRTm9DR1UzeGpwNWFPdy83UDNRa1FaYytrNnVqRmU1CnBsU2Q2emFLZlA1TlluN0dSUzExNGdPL05lRUxkY1ZYSmxsZTRWcEtLNDI3UlZkWlpGS2JaMW1uTTBENU5qN0gKM2F5THdMYmFLYUxFb1lkVUJuKzdOOTRwaFcrVWZvcWF6VU1VZGIyK0NQRittay9zWWl2QmJjZm5UYXVrbHRFaQpsNjF2TlpSTE1MNzZyTXUrMGc4Q1k1QWFSRDFrREJWbFVMbWpJREFINEpvOExWS1dBR05XVHVRK0V0Mk15NkJ3CnhYbWhGYnVteTBDVmFMWXhaR0dSREdpR01YRFFyenJVa2pWY3M5enZRdG80ck80R2RYLzBNUGZabzdaQjFsdnMKcGRDcm5iS2V4YXlPS2VwZEtYclJBb0lCQVFEQnVmSUllZi9EZ0dGbnJWeGhNZEl2UEM0UUVlOXEydjdoUTRWNQphSkQyTGVqRTA4Zy9MbFN6eU5Fd1I3dFFQdG5hdFNkSEttSXVzSXlFVXJmanVyREplOExoWVRobCtCSG1Sa2hwClJ5T1lYYTV5cnZNUk5oSkYweVhFK1MwVko5RmJ2OE5iTHNmdTBRMnlRM0VrdjBBR2c0VmQ3cm9kT2dFMlVRTTgKM1Fib2xWSk1LVkFMWUdRVTJZVWdtUjR3TnRoNVhmSG5uVWZWZnJMTDhER214cU8xY29tbWo1eFdvc2JmSkVCZQpTL0ZtMFdEZE1iZmxHNWp0K0JuZGw5YmpobS8vR1pUaGVmSThQeHdVVVVjekxqbUE2bTd1QUxiSmFwZUl5MWR6CmkyeUZHalV0a2E3aWVBRlBGcjhpZ3E1OCtBL3pEUXJudHZHZmhvbnVZSytyMm0zUkFvSUJBUURyOTZzZTFhNXEKOVlSMU5OaEQvSnRSdVFkMzhnTkU5Yi9US3FWeXcwYlh3WGd0aGxvRzRQa1lPM2R1SjR1ZGJMSGloamlVWko0Zwo4UzJMWFloZjFQZkZJc3k0azAwdGZkVlJhSnVLclpoUUxyMjBSSW9ZclJ6NEpGT1hnV1dMVlNsYkpXc1ZJZTc1CmNiN296eDdjZGo3M1ZJVk1oZlUxVHRRNDhEY1ovNUMwand3QzdiNU9aVmRieERUR21XOVV6WVJJWUptMERpTVEKODFKS3VBTFdBM093dmR3blZPamJGamtBR0pRVWxvNkhacEFKbVFiOUczdjJvZHRMbUVWdzVKcTVkYkZ4bW9OMQpUdVBvVkIwV0JIM0ZDSEIrbzNTbEt1czJ3RWRKQzQ3SWY5MGJnZ05zMzdyMnRLaUEwSllvdWFyN2s0K3Bmb0pDCiswQXVybDBnZWVYM0FvSUJBQkRFOE0rTkIvZTdZRE1pVFpIWVJ6SnhpaWMzOWpxUXRHbDVkODlYbnR6QWdwcXYKSG5GaHFGRmJ1OGZySGFySGxnSVpsa25Sd0dmOFBsMmg3MnNXR1FHSDVnbXVhYnhoNmVLK0NMeWNQTmVPbkhBdQo5cmx4cmNrL2l0QnZKVmprZG5uenNveHRFejkzOXpDTUovb2ZXQUo0Vmc0WWdTSFFpSlJVRk95cTBWTkd3Ylg0CkNZYkNsRWM5d1FsZVY3K2lyOEJwd212ak1IbXBtdjZPVHkzNW1lZzEvdlpkRGhKdHlyczhIeHBLaHAzNDErS3QKMEJaVCtqdjNNdjM4aTh4c01idXFVam5tWFhLYm4rbWVVNFI1cHQ2aTdIRkx2SWJZNUQ4aUl2TE1pZHRIdG55NApWS1FqOFRFUWJnRWx3TWJ1amlyaTRTUEhzVWkyMDYrL3pOVWFkbEVDZ2dFQVQ5YUdnWEpQQjBWMndhbFZtdnMzCkdobCttMmk1RG5ZUHAwVUFvYW5NcUdkL1gvZmJNZ1NnZzBCcmtrdXpBMXFwZlRsb24xekQyK1YrUVc4dUd5NlYKZm8vZHNIMjJXVFFBSHdGRHoxSVkvTmd6dTNDTlFQZ2hteHUvWkwyVk8rVmVqc1pFU3V3bVRTUGRNaXdTQnduRApHQy95d2dkNUJjWmNLNytyQlJMaFJSWTVQQ3h3ZlZud2lzNENCVWdZMFJxUUxXVHgzR2dFR1ZJYWY4bHV2RGFDCnlFUVUzd0h0bjJNUGVpYld3M2lGVk82d3dXNlRYTVFWMTBiQVNmMkZVVU1uenFReG0zeHFDaURkSDlpRjF0TkMKTW80K2NicWdWdG9FcDR1N0VzM2tTNVpubTAwTUY2UkRRQUcwR1pGNW5PSGxKaVd4TCtucjdQblJwM203YktlUgoxUUtDQVFBM1hKMXpSRnpKTnM5b1ptanU2RzVmeHIxME9JdlA1bVcwVm1hVEQzeE1qTnRBZ0dzVHo4TnNab1hpCmlFQXZ0ZTVpQXowUldyUkNoVjNMaWRveGFUQjhnSnZqNVkzZHZjTThBMHFjWEw3QjQzeDhuaG9RUjdxWkNtREoKdXZpR0hKSzlIU05OZnlFWk1TQzNIa1M3T216dWFIWXBTWUhzdndYK2MySTZ3VzMwb0pydkNSQ1BYMERFWCtVVgpsL2FxbFBGZlR3RU1COXBKaE9FSTdyU3crSHNmaGJoaFRKV043UkFCemJuSkZpMFhSdVFpSUljd1ZzY0hlbWpmCkFsUDF3OThjT1NJMmdzZzU3OFlQWHdrZnlvekJJeWVVZFNQWEFBdmozZlF1bnhPNUdORzhWUlo5UlBGU2ZEOU8KdVhTbzllVEJ5NFZIblM5SjJiZkpCV1FQd3F0cQotLS0tLUVORCBQUklWQVRFIEtFWS0tLS0tCg=="
52  k8s/core/bitwarden/traefik.yml  Normal file
@@ -0,0 +1,52 @@
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: bitwarden
  namespace: bitwarden
  annotations:
    kubernetes.io/ingress.class: "traefik"
    traefik.ingress.kubernetes.io/router.middlewares: default-security-headers@kubernetescrd,default-rate-limit@kubernetescrd
spec:
  rules:
    - host: passwords.kimchi
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: bitwarden
                port:
                  name: bitwarden-http
          - path: /notifications/hub
            pathType: Exact
            backend:
              service:
                name: bitwarden
                port:
                  name: websocket
    - host: passwords.schick-web.site
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: bitwarden
                port:
                  name: bitwarden-http
          - path: /notifications/hub
            pathType: Exact
            backend:
              service:
                name: bitwarden
                port:
                  name: websocket
  tls:
    - hosts:
        - passwords.kimchi
      secretName: bitwarden-kimchi-tls
    - hosts:
        - passwords.schick-web.site
      secretName: passwords-schick-web-site-tls
5529  k8s/core/cert-manager/cert-manager.yml  Normal file
File diff suppressed because it is too large
46  k8s/core/cert-manager/letsencrypt-issuer.yml  Normal file
@@ -0,0 +1,46 @@
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
  namespace: cert-manager
spec:
  acme:
    # The ACME server URL
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: thomas.schick@mailbox.org
    # Name of a secret to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-staging

    # Enable HTTP01 challenge provider using Traefik
    solvers:
      - http01:
          ingress:
            class: traefik

---

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
  namespace: cert-manager
spec:
  acme:
    # The ACME server URL
    server: https://acme-v02.api.letsencrypt.org/directory
    # Email address used for ACME registration
    email: thomas.schick@mailbox.org
    # Name of a secret to store the ACME account private key
    privateKeySecretRef:
      name: letsencrypt-prod

    # Use DNS01 challenge via Cloudflare (HTTP01 doesn't work behind CF Tunnel)
    solvers:
      - dns01:
          cloudflare:
            email: thomas.schick@mailbox.org
            apiTokenSecretRef:
              name: cloudflare-api-token
              key: api-token
21  k8s/core/cert-manager/self-signed-issuer.yml  Normal file
@@ -0,0 +1,21 @@
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: cluster-issuer-selfsigned
  namespace: cert-manager
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: selfsigned-cert
  namespace: cert-manager
spec:
  commonName: local-ca
  dnsNames:
    - kino.kimchi
    - passwords.kimchi
  secretName: selfsigned-cert-tls
  issuerRef:
    name: cluster-issuer-selfsigned
4  k8s/core/cloudflare-tunnel/_namespace.yml  Normal file
@@ -0,0 +1,4 @@
apiVersion: v1
kind: Namespace
metadata:
  name: cloudflare
96  k8s/core/cloudflare-tunnel/cloudflared.yml  Normal file
@@ -0,0 +1,96 @@
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cloudflared
  namespace: cloudflare
spec:
  selector:
    matchLabels:
      app: cloudflared
  replicas: 1  # You could also consider elastic scaling for this deployment
  template:
    metadata:
      labels:
        app: cloudflared
    spec:
      containers:
        - name: cloudflared
          image: cloudflare/cloudflared:2026.1.1
          args:
            - tunnel
            # Points cloudflared to the config file, which configures what
            # cloudflared will actually do. This file is created by a ConfigMap
            # below.
            - --config
            - /etc/cloudflared/config/config.yaml
            - run
          livenessProbe:
            httpGet:
              # Cloudflared has a /ready endpoint which returns 200 if and only if
              # it has an active connection to the edge.
              path: /ready
              port: 2000
            failureThreshold: 1
            initialDelaySeconds: 10
            periodSeconds: 10
          volumeMounts:
            - name: config
              mountPath: /etc/cloudflared/config
              readOnly: true
            # Each tunnel has an associated "credentials file" which authorizes machines
            # to run the tunnel. cloudflared will read this file from its local filesystem,
            # and it'll be stored in a k8s secret.
            - name: creds
              mountPath: /etc/cloudflared/creds
              readOnly: true
      volumes:
        - name: creds
          secret:
            # By default, the credentials file will be created under ~/.cloudflared/<tunnel ID>.json
            # when you run `cloudflared tunnel create`. You can move it into a secret by using:
            # ```sh
            # kubectl create secret generic tunnel-credentials \
            #   --from-file=credentials.json=/Users/yourusername/.cloudflared/<tunnel ID>.json
            # ```
            secretName: tunnel-credentials
        # Create a config.yaml file from the ConfigMap below.
        - name: config
          configMap:
            name: cloudflared
            items:
              - key: config.yaml
                path: config.yaml
---
# This ConfigMap is just a way to define the cloudflared config.yaml file in k8s.
# It's useful to define it in k8s, rather than as a stand-alone .yaml file, because
# this lets you use various k8s templating solutions (e.g. Helm charts) to
# parameterize your config, instead of just using string literals.
apiVersion: v1
kind: ConfigMap
metadata:
  name: cloudflared
  namespace: cloudflare
data:
  config.yaml: |
    tunnel: web.site-tunnel
    credentials-file: /etc/cloudflared/creds/credentials.json
    metrics: 0.0.0.0:2000
    no-autoupdate: true
    protocol: http2

    ingress:
      # Nextcloud gets direct routing (bypasses Traefik)
      - hostname: cloud.schick-web.site
        service: http://nextcloud.nextcloud.svc.cluster.local:8080
        originRequest:
          connectTimeout: 30s
          keepAliveConnections: 100
          keepAliveTimeout: 90s
      # All other services go through Traefik
      - hostname: "*.schick-web.site"
        service: http://192.168.178.55:80
        originRequest:
          noTLSVerify: true
      # Catch-all returns 404
      - service: http_status:404
18  k8s/core/cloudflare-tunnel/cloudflared_config.yml  Normal file
@@ -0,0 +1,18 @@
tunnel: web.site-tunnel
credentials-file: .cloudflared/e0000000-e650-4190-0000-19c97abb503b.json
ingress:
  # Rules map traffic from a hostname to a local service:
  - hostname: example.com
    service: https://localhost:8000
  # Rules can match the request's path to a regular expression:
  - hostname: static.example.com
    path: /images/*\.(jpg|png|gif)
    service: https://machine1.local:3000
  # Rules can match the request's hostname to a wildcard character:
  - hostname: "*.ssh.foo.com"
    service: ssh://localhost:2222
  # You can map traffic to the built-in "Hello World" test server:
  - hostname: foo.com
    service: hello_world
  # This "catch-all" rule doesn't have a hostname/path, so it matches everything
  - service: http_status:404
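cloudflared evaluates `ingress` rules top to bottom and refuses to start unless the final rule is a hostname-less catch-all. A crude pure-shell sanity check of that invariant (the `/tmp` path and the minimal config are just for illustration):

```shell
# Write a minimal config and check that the final ingress rule is the
# hostname-less catch-all, which cloudflared requires as the last entry.
cat > /tmp/cloudflared-config.yaml <<'EOF'
ingress:
  - hostname: example.com
    service: https://localhost:8000
  - service: http_status:404
EOF
tail -n 1 /tmp/cloudflared-config.yaml | grep -q 'http_status:404' && echo 'catch-all present'
# prints: catch-all present
```

For a real validation, `cloudflared tunnel ingress validate` parses the config and reports rule errors directly.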
1022  k8s/core/homarr/pre-1-0-config.json  Normal file
File diff suppressed because it is too large
16  k8s/core/homarr/readme.md  Normal file
@@ -0,0 +1,16 @@
# HELM

```
helm repo add homarr https://oben01.github.io/dmz-charts/charts/homarr
```

## Install command used:

```
helm install homarr homarr/homarr --namespace homarr --create-namespace --values=k8s/homarr/values.yaml
```

## Docs

https://homarr.dev/docs/introduction
47  k8s/core/homarr/values.yaml  Normal file
@@ -0,0 +1,47 @@
# image:
#   Overrides the image tag whose default is the chart appVersion.
#   tag: "1.32.0"
service:
  enabled: true
ingress:
  enabled: true
  className: traefik

  hosts:
    - host: dashboard.kimchi
      paths:
        - path: /
          pathType: Prefix
    - host: homarr.kimchi
      paths:
        - path: /
          pathType: Prefix

persistence:
  - name: homarr-config
    enabled: true
    storageClassName: "local-path"
    accessMode: "ReadWriteOnce"
    size: "50Mi"
    mountPath: "/app/data/configs"
  - name: homarr-database
    enabled: true
    storageClassName: "local-path"
    accessMode: "ReadWriteOnce"
    size: "50Mi"
    mountPath: "/app/database"
  - name: homarr-icons
    enabled: true
    storageClassName: "local-path"
    accessMode: "ReadWriteOnce"
    size: "50Mi"
    mountPath: "/app/public/icons"

# Add resource limits to prevent memory/CPU issues
resources:
  limits:
    cpu: 500m
    memory: 768Mi
  requests:
    cpu: 100m
    memory: 256Mi
9  k8s/core/tls/kimchi-tls.yaml  Normal file
@@ -0,0 +1,9 @@
apiVersion: v1
kind: Secret
metadata:
  name: bitwarden-tls-production
  namespace: bitwarden
type: kubernetes.io/tls
data:
  tls.crt: "LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUZoVENDQTIwQ0ZFQlVhZEtqNFRCOHNPbDdMNW42a3ZsWTFYOFVNQTBHQ1NxR1NJYjNEUUVCQ3dVQU1IOHgKQ3pBSkJnTlZCQVlUQWtSRk1RMHdDd1lEVlFRSURBUm9iMjFsTVEwd0N3WURWUVFIREFSb2IyMWxNUmN3RlFZRApWUVFLREE1VFkyaHBZMnNnU0c5emRHbHVaekVQTUEwR0ExVUVBd3dHVkdodmJXRnpNU2d3SmdZSktvWklodmNOCkFRa0JGaGwwYUc5dFlYTXVjMk5vYVdOclFHMWhhV3hpYjNndWIzSm5NQjRYRFRJeU1USXhPVEUzTlRVeE0xb1gKRFRJek1USXhPVEUzTlRVeE0xb3dmekVMTUFrR0ExVUVCaE1DUkVVeERUQUxCZ05WQkFnTUJHaHZiV1V4RFRBTApCZ05WQkFjTUJHaHZiV1V4RnpBVkJnTlZCQW9NRGxOamFHbGpheUJJYjNOMGFXNW5NUTh3RFFZRFZRUUREQVpVCmFHOXRZWE14S0RBbUJna3Foa2lHOXcwQkNRRVdHWFJvYjIxaGN5NXpZMmhwWTJ0QWJXRnBiR0p2ZUM1dmNtY3cKZ2dJaU1BMEdDU3FHU0liM0RRRUJBUVVBQTRJQ0R3QXdnZ0lLQW9JQ0FRQ3lrUjBVMkZVNGJQYUZRaTZCbktnMwovOHY4Y29oeDZnTUdpQlAwVWVYVzErc2Z2Q2ExaTEvc1Vua2ZOam9DUU52V3JVNkY2M1o1VnArWEpaSlFTeG0zCnhSMjRsN2haYUpmYVdhY0dnZzU5U29mNEJLZFhsSVBIUHU0YkpHQ3RWMmI0VHp3OTZRMnFMaXpQcnYvemRtUEkKeUEwMVcrQmVKZlFWS2JjS2xiSTJtN2plNlJGQUdjcmRweEhWaElHMTJYTlMveFloRFNrNUpnblIvdU9PaDI4VwpqaEtPUW12ZUk0bWhQMzFuR09qYS8rS25ySzBwcGZ3ZlFFUENucXgvUytPWVJDUWdlTXdjVG41TGl6Y0lBVk5QClptV3haaWhlMDRhelJNTEVldG1hTW1HdEV3YnNnS1FBV1pBVGl3eXZST2ZVa2VkMG1pNXp1OUkwTEdacHZXMUgKVmtkK1o5VlJiZmE3aHRwRzVyWUEvSi8ycHE0ZHpoajVCM3h6T2MrTVBsSWk1MGRzT2VPMlhTZlB5WWJPdTdSVApISDBnaUVlOCtOOUx5OVdaRVpFdUpNUTZiNTdwaHRMUjVRcW5mN2RKWGU1TDJ3bm5PWkhGaG9OZjFlWjJudmJBCmJ2YWlIZUFWYVpYRXhSVEorTlRSamVHZU84NHpFZmIvcE1mWWNtTDZaY2xaanFRWUJGcGFMMllCcWQ1ODVMQ28KWldEQTR5ZDM2YkRsUGhDWGdzdnFUcmcxaitpMmJydEdUNXQyYk9KUE5ISTdxdXgxTmswSlNqcjBKTnd6Kys2MApyRG1HRmduR1hYQW9SOGpOOGRKU0gyVXIzWXhHck1LWnZiQzFTZ2lYRkF3eE5uMDhkL25qYmJyRWk0N2krQlFkCitaVmFlN3djMHpIYWkra2VPOFRwcHdJREFRQUJNQTBHQ1NxR1NJYjNEUUVCQ3dVQUE0SUNBUUNDVWs5VWNKSjUKb21oQmVnelZSOTZIaTZzZDRvYzRQeUJHLzc5UzhvMk5LUXRsenhiR2JJaXAvYXVrZ3VMZ2lFbk4wMDlXT1c5dQpwVVNkaitCK1pmbzE3QWtldU9sOTBrTUVtaER6Z05IUVF2c2RtWVlSalRlRHN6Q0hrZlU2OXhEL0tiT3NpYnluClkyWTZLMWpRS3ROK0lxSUJxYzVsa1ZvUWRmOWExbXFydUQ5MUwwQVdpRTVtRHJwWnJ1WDUyT3NVYkphb3BqamUKUTVWUXYrWGZLcWxyZ1R3NmFWbFIwUlhCOWRXRVNieHlVTUZXQUhUL0JjbFppY2xuYy9oUjhCN0Z4bHhsMWttMAo4QXpkakVhNmpNOHIxdlZEOS9PUFkvY3owNzUxbjY2Sm1pUjYrTWJBN1lBbWJLRzFoUC8raDF4N0lCVUUvVjE1CkIxTnRWbDNpeEN2V2xXOGFJRWh5K0hzSm5JMllnYVNrVVlxbDg5eWRuWEljMEh6UEhVWVZ1UmRLYmNJNXFuRjcKTVl2N25FbmtndFRscEF4VEY4c2ZOaTJOM2JieTh3QWxqSlQ0aWIvWE12ais5eldDMldnMXoxVVJXdmx6bUEvdwpGSlhnR3NPL3dJVml3VlRES1h4c1cwVVpJc3kzUUtOUWxIdWdLRjdVbDFxcHZGbk52c1NPczZnTnJCcjF6anNJCmZLa3NuUlEzSlk3N01PNGFHNzVOb0Vnd0c0WWtDNVYwK01JWUFJK3gxNmlycDRGNlVGMDhYdzZ4RjlqMGhseGMKenB0RW5WVE5qZDB4ZjlDYVhBa2NaM1RuRHlJSFBUbGhRWmNCbkw5MkRpTFBDWTM2MTREL2p6aHh4M3o1bzlnZApOVzh0MW1ncDFLMlU2WlJwdERaQWwrbVRkY1ZiTnRtZEJRPT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo="
  tls.key: "LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUpRUUlCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQ1Nzd2dna25BZ0VBQW9JQ0FRQ3lrUjBVMkZVNGJQYUYKUWk2Qm5LZzMvOHY4Y29oeDZnTUdpQlAwVWVYVzErc2Z2Q2ExaTEvc1Vua2ZOam9DUU52V3JVNkY2M1o1VnArWApKWkpRU3htM3hSMjRsN2haYUpmYVdhY0dnZzU5U29mNEJLZFhsSVBIUHU0YkpHQ3RWMmI0VHp3OTZRMnFMaXpQCnJ2L3pkbVBJeUEwMVcrQmVKZlFWS2JjS2xiSTJtN2plNlJGQUdjcmRweEhWaElHMTJYTlMveFloRFNrNUpnblIKL3VPT2gyOFdqaEtPUW12ZUk0bWhQMzFuR09qYS8rS25ySzBwcGZ3ZlFFUENucXgvUytPWVJDUWdlTXdjVG41TAppemNJQVZOUFptV3haaWhlMDRhelJNTEVldG1hTW1HdEV3YnNnS1FBV1pBVGl3eXZST2ZVa2VkMG1pNXp1OUkwCkxHWnB2VzFIVmtkK1o5VlJiZmE3aHRwRzVyWUEvSi8ycHE0ZHpoajVCM3h6T2MrTVBsSWk1MGRzT2VPMlhTZlAKeVliT3U3UlRISDBnaUVlOCtOOUx5OVdaRVpFdUpNUTZiNTdwaHRMUjVRcW5mN2RKWGU1TDJ3bm5PWkhGaG9OZgoxZVoybnZiQWJ2YWlIZUFWYVpYRXhSVEorTlRSamVHZU84NHpFZmIvcE1mWWNtTDZaY2xaanFRWUJGcGFMMllCCnFkNTg1TENvWldEQTR5ZDM2YkRsUGhDWGdzdnFUcmcxaitpMmJydEdUNXQyYk9KUE5ISTdxdXgxTmswSlNqcjAKSk53eisrNjByRG1HRmduR1hYQW9SOGpOOGRKU0gyVXIzWXhHck1LWnZiQzFTZ2lYRkF3eE5uMDhkL25qYmJyRQppNDdpK0JRZCtaVmFlN3djMHpIYWkra2VPOFRwcHdJREFRQUJBb0lDQUE0Vnp6N3hMNHV6WUtDelR6clBFaC9VCnNCRDZNZFFXZXVXZGYwRnk0bGZYa1ViZ0R3ZWI5bFdNVGR5TjZQWjdpanU5VU9mVVluU0F4amJrY2syZUZ1bTkKaFRJbDJaZEgzazZOYXRUakZtU0FxQWdDeWZacEV4bjQxMHhSeXNSeGxBQTdNOHZJWWRrT0tsKzVkSndPTnlIRAowZkxuQytReFJ6Y1NJc3VWY2tqSGNNWTRpVEZPdDRkVFlkOC9SQUlGcElpajhXbHJBZGp0ckxHaFV1N1B0UTRJCjUrTEx6M2xtd1RqMGFwNWl3eDlmTnRBMkdUU2pVS2RnYU5mbHIwS2RTRmNlN09DWTFyVFc1alJzUjB0ODIwYjAKazBueWZuVzRaclFtVk55dHVoTmxMUG83ZVh6WFN0aHlPQ0NxZzdZaGo4M2ZNbXdxcTBaYXh1SnhyQ1JrR2tYMwozZGEyUWJSV1VvSElGZmJPcjNTTk9UTmhpVmU4WEVNcGdRTm9DR1UzeGpwNWFPdy83UDNRa1FaYytrNnVqRmU1CnBsU2Q2emFLZlA1TlluN0dSUzExNGdPL05lRUxkY1ZYSmxsZTRWcEtLNDI3UlZkWlpGS2JaMW1uTTBENU5qN0gKM2F5THdMYmFLYUxFb1lkVUJuKzdOOTRwaFcrVWZvcWF6VU1VZGIyK0NQRittay9zWWl2QmJjZm5UYXVrbHRFaQpsNjF2TlpSTE1MNzZyTXUrMGc4Q1k1QWFSRDFrREJWbFVMbWpJREFINEpvOExWS1dBR05XVHVRK0V0Mk15NkJ3CnhYbWhGYnVteTBDVmFMWXhaR0dSREdpR01YRFFyenJVa2pWY3M5enZRdG80ck80R2RYLzBNUGZabzdaQjFsdnMKcGRDcm5iS2V4YXlPS2VwZEtYclJBb0lCQVFEQnVmSUllZi9EZ0dGbnJWeGhNZEl2UEM0UUVlOXEydjdoUTRWNQphSkQyTGVqRTA4Zy9MbFN6eU5Fd1I3dFFQdG5hdFNkSEttSXVzSXlFVXJmanVyREplOExoWVRobCtCSG1Sa2hwClJ5T1lYYTV5cnZNUk5oSkYweVhFK1MwVko5RmJ2OE5iTHNmdTBRMnlRM0VrdjBBR2c0VmQ3cm9kT2dFMlVRTTgKM1Fib2xWSk1LVkFMWUdRVTJZVWdtUjR3TnRoNVhmSG5uVWZWZnJMTDhER214cU8xY29tbWo1eFdvc2JmSkVCZQpTL0ZtMFdEZE1iZmxHNWp0K0JuZGw5YmpobS8vR1pUaGVmSThQeHdVVVVjekxqbUE2bTd1QUxiSmFwZUl5MWR6CmkyeUZHalV0a2E3aWVBRlBGcjhpZ3E1OCtBL3pEUXJudHZHZmhvbnVZSytyMm0zUkFvSUJBUURyOTZzZTFhNXEKOVlSMU5OaEQvSnRSdVFkMzhnTkU5Yi9US3FWeXcwYlh3WGd0aGxvRzRQa1lPM2R1SjR1ZGJMSGloamlVWko0Zwo4UzJMWFloZjFQZkZJc3k0azAwdGZkVlJhSnVLclpoUUxyMjBSSW9ZclJ6NEpGT1hnV1dMVlNsYkpXc1ZJZTc1CmNiN296eDdjZGo3M1ZJVk1oZlUxVHRRNDhEY1ovNUMwand3QzdiNU9aVmRieERUR21XOVV6WVJJWUptMERpTVEKODFKS3VBTFdBM093dmR3blZPamJGamtBR0pRVWxvNkhacEFKbVFiOUczdjJvZHRMbUVWdzVKcTVkYkZ4bW9OMQpUdVBvVkIwV0JIM0ZDSEIrbzNTbEt1czJ3RWRKQzQ3SWY5MGJnZ05zMzdyMnRLaUEwSllvdWFyN2s0K3Bmb0pDCiswQXVybDBnZWVYM0FvSUJBQkRFOE0rTkIvZTdZRE1pVFpIWVJ6SnhpaWMzOWpxUXRHbDVkODlYbnR6QWdwcXYKSG5GaHFGRmJ1OGZySGFySGxnSVpsa25Sd0dmOFBsMmg3MnNXR1FHSDVnbXVhYnhoNmVLK0NMeWNQTmVPbkhBdQo5cmx4cmNrL2l0QnZKVmprZG5uenNveHRFejkzOXpDTUovb2ZXQUo0Vmc0WWdTSFFpSlJVRk95cTBWTkd3Ylg0CkNZYkNsRWM5d1FsZVY3K2lyOEJwd212ak1IbXBtdjZPVHkzNW1lZzEvdlpkRGhKdHlyczhIeHBLaHAzNDErS3QKMEJaVCtqdjNNdjM4aTh4c01idXFVam5tWFhLYm4rbWVVNFI1cHQ2aTdIRkx2SWJZNUQ4aUl2TE1pZHRIdG55NApWS1FqOFRFUWJnRWx3TWJ1amlyaTRTUEhzVWkyMDYrL3pOVWFkbEVDZ2dFQVQ5YUdnWEpQQjBWMndhbFZtdnMzCkdobCttMmk1RG5ZUHAwVUFvYW5NcUdkL1gvZmJNZ1NnZzBCcmtrdXpBMXFwZlRsb24xekQyK1YrUVc4dUd5NlYKZm8vZHNIMjJXVFFBSHdGRHoxSVkvTmd6dTNDTlFQZ2hteHUvWkwyVk8rVmVqc1pFU3V3bVRTUGRNaXdTQnduRApHQy95d2dkNUJjWmNLNytyQlJMaFJSWTVQQ3h3ZlZud2lzNENCVWdZMFJxUUxXVHgzR2dFR1ZJYWY4bHV2RGFDCnlFUVUzd0h0bjJNUGVpYld3M2lGVk82d3dXNlRYTVFWMTBiQVNmMkZVVU1uenFReG0zeHFDaURkSDlpRjF0TkMKTW80K2NicWdWdG9FcDR1N0VzM2tTNVpubTAwTUY2UkRRQUcwR1pGNW5PSGxKaVd4TCtucjdQblJwM203YktlUgoxUUtDQVFBM1hKMXpSRnpKTnM5b1ptanU2RzVmeHIxME9JdlA1bVcwVm1hVEQzeE1qTnRBZ0dzVHo4TnNab1hpCmlFQXZ0ZTVpQXowUldyUkNoVjNMaWRveGFUQjhnSnZqNVkzZHZjTThBMHFjWEw3QjQzeDhuaG9RUjdxWkNtREoKdXZpR0hKSzlIU05OZnlFWk1TQzNIa1M3T216dWFIWXBTWUhzdndYK2MySTZ3VzMwb0pydkNSQ1BYMERFWCtVVgpsL2FxbFBGZlR3RU1COXBKaE9FSTdyU3crSHNmaGJoaFRKV043UkFCemJuSkZpMFhSdVFpSUljd1ZzY0hlbWpmCkFsUDF3OThjT1NJMmdzZzU3OFlQWHdrZnlvekJJeWVVZFNQWEFBdmozZlF1bnhPNUdORzhWUlo5UlBGU2ZEOU8KdVhTbzllVEJ5NFZIblM5SjJiZkpCV1FQd3F0cQotLS0tLUVORCBQUklWQVRFIEtFWS0tLS0tCg=="
Some files were not shown because too many files have changed in this diff