Ansible - Configuring a Proxmox Server

    Due to an illness I currently have a bit more time for the console, so I have started to unify the setups of all my servers. I wanted to begin with my local Proxmox host. While doing that, it occurred to me that I also still needed a Debian Bookworm 12 template.

    So I downloaded the current Debian image and used it to set up a Debian Bookworm 12 server. At this point I needed SSH key access to it (for my Semaphore).

    So I added two SSH keys right away: one for my main PC and one for the Semaphore server. After that I converted the server into a template.
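
    Converting and cloning does not have to happen in the GUI, either. A minimal sketch with Ansible tasks, assuming they run on the Proxmox host itself (this is my assumption, not part of the original workflow; the VM IDs match the screenshot below, and qm is Proxmox's own CLI):

        - name: Convert VM 105 into a template (run on the Proxmox host)
          ansible.builtin.command: qm template 105

        - name: Create a full clone of the template
          ansible.builtin.command: qm clone 105 106 --name BookwormTEST --full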

    [Screenshot: Proxmox VM overview showing template 105 and test server 106]

    105 is the template, 106 a test server created from it. OK, that works as expected; now I want to configure the server the way I like it. Since this is about Ansible, I need a playbook for that.

    ---
    ###############################################
    # Playbook for my Proxmox VMs
    ###############################################
    - name: My task
      hosts: proxmox_test
      tasks:
    
        #####################
        # Update && Upgrade installed packages and install a set of base software
        #####################
        - name: Update apt package cache.
          ansible.builtin.apt:
            update_cache: yes
            cache_valid_time: 600
    
        - name: Upgrade installed apt packages.
          ansible.builtin.apt:
            upgrade: 'yes'
    
        - name: Ensure that a base set of software packages are installed.
          ansible.builtin.apt:
            pkg:
             - crowdsec
             - crowdsec-firewall-bouncer
             - duf
             - htop
             - needrestart
             - psmisc
             - python3-openssl
             - ufw
            state: latest
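            # Note: 'state: latest' also upgrades these packages on every later
            # run; 'state: present' would only ensure they are installed.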
    
        #####################
        # Setup UFW
        #####################
        - name: Enable UFW
          community.general.ufw:
            state: enabled
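        # Note: ufw's default ruleset accepts established connections via
        # connection tracking, so the running Ansible SSH session survives;
        # on fresh hosts it can be safer to allow OpenSSH before enabling.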
    
        - name: Set policy IN
          community.general.ufw:
            direction: incoming
            policy: deny
    
        - name: Set policy OUT
          community.general.ufw:
            direction: outgoing
            policy: allow
    
        - name: Set logging
          community.general.ufw:
            logging: 'on'
    
        - name: Allow OpenSSH rule
          community.general.ufw:
            rule: allow
            name: OpenSSH
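        # 'name: OpenSSH' uses the ufw application profile that the
        # openssh-server package ships under /etc/ufw/applications.d/.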
    
        - name: Allow HTTP rule
          community.general.ufw:
            rule: allow
            port: 80
            proto: tcp
    
        - name: Allow HTTPS rule
          community.general.ufw:
            rule: allow
            port: 443
            proto: tcp
    
        #####################
        # Setup CrowdSEC
        #####################
        - name: Add one line to crowdsec config.yaml
          ansible.builtin.lineinfile:
            path: /etc/crowdsec/config.yaml
            insertafter: '^db_config:'
            line: '  use_wal: true'
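        # After this change, the db_config block in /etc/crowdsec/config.yaml
        # looks roughly like this (type and db_path are CrowdSec's Debian
        # defaults, shown only for illustration):
        #
        #   db_config:
        #     use_wal: true
        #     type: sqlite
        #     db_path: /var/lib/crowdsec/data/crowdsec.db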
    
        #####################
        # Generate Self-Signed SSL Certificate
        # for this we need python3-openssl on the client
        #####################
        - name: Create a directory for the self-signed certificate
          ansible.builtin.file:
            path: /etc/ssl/self-signed_ssl/
            state: directory
            mode: '0755'
    
        - name: Create private key (RSA, 4096 bits)
          community.crypto.openssl_privatekey:
            path: /etc/ssl/self-signed_ssl/privkey.pem
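        # community.crypto.openssl_privatekey defaults to an RSA key with a
        # size of 4096 bits, which is what the task name above refers to.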
    
        - name: Create simple self-signed certificate
          community.crypto.x509_certificate:
            path: /etc/ssl/self-signed_ssl/fullchain.pem
            privatekey_path: /etc/ssl/self-signed_ssl/privkey.pem
            provider: selfsigned
    
        - name: Check if the private key exists
          ansible.builtin.stat:
            path: /etc/ssl/self-signed_ssl/privkey.pem
          register: privkey_stat
    
        - name: Renew self-signed certificate
          community.crypto.x509_certificate:
            path: /etc/ssl/self-signed_ssl/fullchain.pem
            privatekey_path: /etc/ssl/self-signed_ssl/privkey.pem
            provider: selfsigned
          when: privkey_stat.stat.exists and privkey_stat.stat.size > 0
    
        #####################
        # Check for new kernel and reboot
        #####################
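        # needrestart -k -p runs in Nagios plugin mode: rc 2 signals that the
        # running kernel is outdated, which the reboot task below keys on.
        # Non-zero exit codes are expected here, hence ignore_errors.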
        - name: Check if a new kernel is available
          ansible.builtin.command: needrestart -k -p
          register: result
          ignore_errors: yes
    
        - name: Restart the server if new kernel is available
          ansible.builtin.command: reboot
          when: result.rc == 2
          async: 1
          poll: 0
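          # async: 1 with poll: 0 makes the reboot fire-and-forget, so Ansible
          # does not hang on an SSH session that the reboot is about to kill.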
    
        - name: Wait for the reboot and reconnect
          ansible.builtin.wait_for:
            port: 22
            host: '{{ (ansible_ssh_host|default(ansible_host))|default(inventory_hostname) }}'
            search_regex: OpenSSH
            delay: 10
            timeout: 60
          connection: local
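          # connection: local runs this check from the controller, polling the
          # target's SSH port until the OpenSSH banner answers again.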
    
        - name: Check the Uptime of the servers
          ansible.builtin.shell: uptime
          register: Uptime
    
        - debug: var=Uptime.stdout
    

    The inventory must contain the server you want to configure, something like this:

    [proxmox_test]
    192.168.3.19 # BookwormTEST
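
    # Connection settings can live here as well (a sketch; the user is an
    # assumption for illustration):
    [proxmox_test:vars]
    ansible_user=root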
    

    The playbook does the following:

    • We update all installed packages
    • We install a set of packages I have picked
    • Configure UFW
    • Configure CrowdSec
    • Configure a self-signed certificate
    • Check whether a new kernel is available; if so, reboot
    • Uptime - check that the server came back up successfully

    After that, the server is set up the way I want it:

    • Up to date
    • ufw - ports 22, 80 and 443 open (SSH, HTTP & HTTPS)
    • CrowdSec as a fail2ban replacement
    • Self-signed certificates, which I only use for local servers
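
    The run shown below goes through Semaphore; started manually it would look like this (the playbook filename is taken from the Semaphore log further down, the inventory filename is an assumption):

        ansible-playbook -i hosts proxmox_template_configuration.yml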

    A successful run in Semaphore looks like this:

     12:38:16 PM
    Task 384 added to queue
    12:38:21 PM
    Preparing: 384
    12:38:21 PM
    Prepare TaskRunner with template: Proxmox configure Proxmox Template
    12:38:22 PM
    Von https://gitlab.com/Bullet64/playbook
    12:38:22 PM
    e7c8531..c547cfc master -> origin/master
    12:38:22 PM
    Updating Repository https://gitlab.com/Bullet64/playbook.git
    12:38:23 PM
    Von https://gitlab.com/Bullet64/playbook
    12:38:23 PM
    * branch master -> FETCH_HEAD
    12:38:23 PM
    Aktualisiere e7c8531..c547cfc
    12:38:23 PM
    Fast-forward
    12:38:23 PM
    proxmox_template_configuration.yml | 5 +++++
    12:38:23 PM
    1 file changed, 5 insertions(+)
    12:38:23 PM
    Get current commit hash
    12:38:23 PM
    Get current commit message
    12:38:23 PM
    installing static inventory
    12:38:23 PM
    No collections/requirements.yml file found. Skip galaxy install process.
    12:38:23 PM
    No roles/requirements.yml file found. Skip galaxy install process.
    12:38:26 PM
    Started: 384
    12:38:26 PM
    Run TaskRunner with template: Proxmox configure Proxmox Template
    12:38:26 PM
    12:38:26 PM
    PLAY [My task] *****************************************************************
    12:38:26 PM
    12:38:26 PM
    TASK [Gathering Facts] *********************************************************
    12:38:28 PM
    ok: [192.168.3.19]
    12:38:28 PM
    12:38:28 PM
    TASK [Update apt package cache.] ***********************************************
    12:38:29 PM
    ok: [192.168.3.19]
    12:38:29 PM
    12:38:29 PM
    TASK [Upgrade installed apt packages.] *****************************************
    12:38:30 PM
    ok: [192.168.3.19]
    12:38:30 PM
    12:38:30 PM
    TASK [Ensure that a base set of software packages are installed.] **************
    12:38:31 PM
    ok: [192.168.3.19]
    12:38:31 PM
    12:38:31 PM
    TASK [Enable UFW] **************************************************************
    12:38:32 PM
    ok: [192.168.3.19]
    12:38:32 PM
    12:38:32 PM
    TASK [Set policy IN] ***********************************************************
    12:38:34 PM
    ok: [192.168.3.19]
    12:38:34 PM
    12:38:34 PM
    TASK [Set policy OUT] **********************************************************
    12:38:36 PM
    ok: [192.168.3.19]
    12:38:36 PM
    12:38:36 PM
    TASK [Set logging] *************************************************************
    12:38:37 PM
    ok: [192.168.3.19]
    12:38:37 PM
    12:38:37 PM
    TASK [Allow OpenSSH rule] ******************************************************
    12:38:37 PM
    ok: [192.168.3.19]
    12:38:37 PM
    12:38:37 PM
    TASK [Allow HTTP rule] *********************************************************
    12:38:38 PM
    ok: [192.168.3.19]
    12:38:38 PM
    12:38:38 PM
    TASK [Allow HTTPS rule] ********************************************************
    12:38:38 PM
    ok: [192.168.3.19]
    12:38:38 PM
    12:38:38 PM
    TASK [Add one line to crowdsec config.yaml] ************************************
    12:38:39 PM
    ok: [192.168.3.19]
    12:38:39 PM
    12:38:39 PM
    TASK [Create a new directory www at given path] ********************************
    12:38:39 PM
    ok: [192.168.3.19]
    12:38:39 PM
    12:38:39 PM
    TASK [Create private key (RSA, 4096 bits)] *************************************
    12:38:41 PM
    ok: [192.168.3.19]
    12:38:41 PM
    12:38:41 PM
    TASK [Create simple self-signed certificate] ***********************************
    12:38:43 PM
    ok: [192.168.3.19]
    12:38:43 PM
    12:38:43 PM
    TASK [Check if the private key exists] *****************************************
    12:38:43 PM
    ok: [192.168.3.19]
    12:38:43 PM
    12:38:43 PM
    TASK [Renew self-signed certificate] *******************************************
    12:38:44 PM
    ok: [192.168.3.19]
    12:38:44 PM
    12:38:44 PM
    TASK [Check if a new kernel is available] **************************************
    12:38:44 PM
    changed: [192.168.3.19]
    12:38:44 PM
    12:38:44 PM
    TASK [Restart the server if new kernel is available] ***************************
    12:38:44 PM
    skipping: [192.168.3.19]
    12:38:44 PM
    12:38:44 PM
    TASK [Wait for the reboot and reconnect] ***************************************
    12:38:55 PM
    ok: [192.168.3.19]
    12:38:55 PM
    12:38:55 PM
    TASK [Check the Uptime of the servers] *****************************************
    12:38:55 PM
    changed: [192.168.3.19]
    12:38:55 PM
    12:38:55 PM
    TASK [debug] *******************************************************************
    12:38:55 PM
    ok: [192.168.3.19] => {
    12:38:55 PM
    "Uptime.stdout": " 12:38:55 up 19 min, 2 users, load average: 0,84, 0,29, 0,10"
    12:38:55 PM
    }
    12:38:55 PM
    12:38:55 PM
    PLAY RECAP *********************************************************************
    12:38:55 PM
    192.168.3.19 : ok=21 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
    12:38:55 PM
    