Wazuh is a security monitoring platform. It monitors for:
- Security-related system events.
- Known vulnerabilities (CVEs) in versions of installed software.
- Misconfigurations in system security.
- Create an infrastructure VM for the Wazuh manager, and add it to the wazuh-manager group.
- Configure the infrastructure VM with kayobe:

  kayobe infra vm host configure

- Edit your config under $KAYOBE_CONFIG_PATH/inventory/group_vars/wazuh-manager/wazuh-manager. In particular, the defaults assume that the provision_oc_net network will be used.
- Generate secrets:

  kayobe playbook run $KAYOBE_CONFIG_PATH/ansible/wazuh-secrets.yml

- Deploy the Wazuh manager:

  kayobe playbook run $KAYOBE_CONFIG_PATH/ansible/wazuh-manager.yml

- Deploy the Wazuh agents:

  kayobe playbook run $KAYOBE_CONFIG_PATH/ansible/wazuh-agent.yml
Provisioning an infra VM for Wazuh Manager.
Kayobe supports :kayobe-doc:`provisioning infra VMs <deployment.html#infrastructure-vms>`. The following configuration may be used as a guide. Config for infra VMs is documented :kayobe-doc:`here <configuration/reference/infra-vms>`.
Add a Wazuh Manager host to the wazuh-manager group in $KAYOBE_CONFIG_PATH/inventory/hosts:

[wazuh-manager]
os-wazuh

Add the wazuh-manager group to the infra-vms group in $KAYOBE_CONFIG_PATH/inventory/groups:

[wazuh-manager]

[infra-vms:children]
wazuh-manager

Define VM sizing in $KAYOBE_CONFIG_PATH/inventory/group_vars/wazuh-manager/infra-vms:
---
# Memory in MB.
infra_vm_memory_mb: 16384
# Number of vCPUs.
infra_vm_vcpus: 8
# Capacity of the infra VM data volume.
infra_vm_data_capacity: "200G"

Optional: define LVM volumes in $KAYOBE_CONFIG_PATH/inventory/group_vars/wazuh-manager/lvm.
/var/ossec often requires greater storage space, and /var/lib/wazuh-indexer
may be beneficial too.
# List of infra VM volume groups. See mrlesmithjr.manage-lvm role for
# format.
infra_vm_lvm_groups:
  - vgname: "data"
    disks:
      - "/dev/vdb"
    create: true
    lvnames:
      - lvname: "data"
        size: "100%VG"
        filesystem: "ext4"
        mount: true
        mntp: "/var/ossec"
        create: true

Define network interfaces in $KAYOBE_CONFIG_PATH/inventory/group_vars/wazuh-manager/network-interfaces:
(The following is an example - the names will depend on your particular network configuration.)
---
# Overcloud provisioning network IP information.
provision_oc_net_interface: "ens3"

The Wazuh manager may need to be exposed externally, in which case it may require another interface.
This can be done as follows in $KAYOBE_CONFIG_PATH/inventory/group_vars/wazuh-manager/network-interfaces,
with the network defined in networks.yml as usual.
infra_vm_extra_network_interfaces:
- "extra_net"
# External network connectivity on ens2
extra_net_interface: "ens2"

Follow the Kayobe instructions to :kayobe-doc:`provision the VM <deployment.html#infrastructure-vms>` and configure the host.
Your wazuh-manager VM needs network connectivity to the servers on which the Wazuh agent will be installed; preferably it should be on the provision_oc_net network.
Several services are used for the communication of Wazuh components. Below is the list of default ports used by these services.
| Component | Port | Protocol | Purpose |
|---|---|---|---|
| Wazuh server | 1514 | TCP (default) | Agent connection service |
| Wazuh server | 1514 | UDP (optional) | Agent connection service (disabled by default) |
| Wazuh server | 1515 | TCP | Agent enrollment service |
| Wazuh server | 1516 | TCP | Wazuh cluster daemon |
| Wazuh server | 514 | UDP (default) | Wazuh Syslog collector (disabled by default) |
| Wazuh server | 514 | TCP (optional) | Wazuh Syslog collector (disabled by default) |
| Wazuh server | 55000 | TCP | Wazuh server RESTful API |
| Wazuh indexer | 9200 | TCP | Wazuh indexer RESTful API |
| Wazuh indexer | 9300-9400 | TCP | Wazuh indexer cluster communication |
| Wazuh dashboard | 443 | TCP | Wazuh web user interface |
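Before enrolling agents, it can be useful to confirm that the agent connection (1514) and enrollment (1515) ports listed above are reachable from a prospective agent host. A minimal sketch using bash's /dev/tcp pseudo-device; the check_port helper and the manager IP are illustrative, not part of Kayobe or Wazuh:

```shell
#!/bin/bash
# check_port: succeed if a TCP connection to host:port can be opened.
# Relies on bash's /dev/tcp with a short timeout, so no extra tools needed.
check_port() {
  local host=$1 port=$2
  timeout 2 bash -c ">/dev/tcp/${host}/${port}" 2>/dev/null
}

WAZUH_MANAGER=${WAZUH_MANAGER:-10.10.224.5}  # example IP used in this guide
for port in 1514 1515; do
  if check_port "$WAZUH_MANAGER" "$port"; then
    echo "port ${port}: reachable"
  else
    echo "port ${port}: unreachable"
  fi
done
```

Run this from a host that will receive the Wazuh agent, substituting your manager's address.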
In cases where you can't use infra-vms to deploy your wazuh-manager VM but you still want to configure the host using kayobe, here are some tips (note that, depending on your setup, these may not all apply):
- Hosts have to be part of some group in the inventory. The infra-vms group still seems like the best choice, and in that case you can use kayobe infra vm host configure to configure the host. The tips below assume that infra-vms will be used.
- A stack user with passwordless sudo and SSH key access needs to be present on the host. This can be achieved in many different ways, depending on your setup.
- LVM configuration should be placed in host_vars/<host_name>.
- wazuh-manager hosts have to be part of the infra-vms group (directly or as a child).
- The network used on the host needs to be defined in networks.yml, and if you have pre-allocated an IP, it should be added to network-allocation.yml. For example, if using a host with IP 10.10.224.5 in network 10.10.224.0/24, add:
networks.yml:
provision_oc_net_cidr: 10.10.224.0/24
provision_oc_net_allocation_pool_start: 10.10.224.3
provision_oc_net_allocation_pool_end: 10.10.224.200
provision_oc_net_gateway: 10.10.224.254

network-allocation.yml:

provision_oc_net_ips:
  nesmetprd01: 10.10.224.5

To install a specific version, modify the wazuh-ansible entry in $KAYOBE_CONFIG_PATH/ansible/requirements.yml:
roles:
  - name: wazuh-ansible
    src: https://github.com/stackhpc/wazuh-ansible
    version: custom-branch

The default version has been tested and verified, but there is no reason not to use a different one.
Reinstall the role if required:
kayobe control host bootstrap
Edit the playbook and variables to your needs:
Wazuh manager playbook is located in $KAYOBE_CONFIG_PATH/ansible/wazuh-manager.yml.
Running this playbook will:
- generate certificates for wazuh-manager
- set up and deploy filebeat on the wazuh-manager VM
- set up and deploy wazuh-indexer on the wazuh-manager VM
- set up and deploy wazuh-manager on the wazuh-manager VM
- set up and deploy wazuh-dashboard on the wazuh-manager VM
- copy certificates over to the wazuh-manager VM
Wazuh manager variables file is located in $KAYOBE_CONFIG_PATH/inventory/group_vars/wazuh-manager/wazuh-manager.
You may need to modify some of the variables, including:
- wazuh_manager_ip
Note
If you are using multiple environments, and you need to customise Wazuh in
each environment, create override files in an appropriate directory,
for example $KAYOBE_CONFIG_PATH/environments/production/inventory/group_vars/.
Files whose values can be overridden (in the context of Wazuh):
- $KAYOBE_CONFIG_PATH/inventory/group_vars/wazuh/wazuh-manager/wazuh-manager
- $KAYOBE_CONFIG_PATH/wazuh-manager.yml
- $KAYOBE_CONFIG_PATH/inventory/group_vars/wazuh/wazuh-agent/wazuh-agent
You'll need to run the wazuh-manager.yml playbook again to apply the customisation.
Wazuh requires that secrets or passwords are set for itself and the services with which it communicates.
The Wazuh secrets playbook is located in $KAYOBE_CONFIG_PATH/ansible/wazuh-secrets.yml.
Running this playbook will generate the pertinent security items and place them in a secrets
vault file at $KAYOBE_CONFIG_PATH/wazuh-secrets.yml.
If using environments, it ends up in $KAYOBE_CONFIG_PATH/environments/<env_name>/wazuh-secrets.yml.
Remember to encrypt!
The Wazuh secrets template is located in $KAYOBE_CONFIG_PATH/ansible/templates/wazuh-secrets.yml.j2.
It is used by the Wazuh secrets playbook to generate the Wazuh secrets vault file.

kayobe playbook run $KAYOBE_CONFIG_PATH/ansible/wazuh-secrets.yml

Note
Use ansible-vault to view the secrets:
ansible-vault view --vault-password-file ~/vault.password $KAYOBE_CONFIG_PATH/inventory/group_vars/wazuh-manager/wazuh-secrets.yml
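Because it is easy to forget the encryption step, a quick header check can confirm whether a secrets file has actually been vault-encrypted. A small sketch; the is_vault_encrypted helper is illustrative and not part of Kayobe, relying only on the $ANSIBLE_VAULT header that ansible-vault writes at the top of encrypted files:

```shell
#!/bin/bash
# is_vault_encrypted: succeed if the file starts with the Ansible Vault
# header ($ANSIBLE_VAULT;...), i.e. it has been encrypted.
is_vault_encrypted() {
  head -c 14 "$1" 2>/dev/null | grep -q '^\$ANSIBLE_VAULT'
}

# Example usage against a secrets file:
# is_vault_encrypted "$KAYOBE_CONFIG_PATH/wazuh-secrets.yml" \
#   && echo "encrypted" || echo "NOT encrypted - run ansible-vault encrypt"
```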
It is common to want to configure the Wazuh dashboard to serve on a different network than the overcloud provisioning network used for Wazuh's internal communication.
In order to do so, either create or edit the $KAYOBE_CONFIG_PATH/environments/<env_name>/inventory/group_vars/wazuh-manager/wazuh-manager.yml configuration file to include the dashboard variable:
dashboard_server_host: "{{ <network-name-prefix>_net_name | net_ip }}"

For example:

dashboard_server_host: "{{ public_net_name | net_ip }}"

If this is being added post-deployment, re-run the wazuh-manager.yml Ansible playbook via the following command:

kayobe playbook run $KAYOBE_CONFIG_PATH/ansible/wazuh-manager.yml

More on the deployment of the Wazuh dashboard can be found below in the :ref:`subsequent section <Deploy>`.
You can generate your own TLS certificates; otherwise, skip this section.
By default, Wazuh Ansible uses wazuh-cert-tool.sh
to automatically
generate certificates for wazuh-indexer (previously Elasticsearch and opendistro)
and wazuh-dashboard (previously Kibana) using a local CA.
If the certificates directory {{ kayobe_env_config_path }}/wazuh
does not exist, it will generate the following certificates in {{ kayobe_env_config_path }}/wazuh/wazuh-certificates/
(here os-wazuh is set as elasticsearch_node_name and kibana_node_name):
- Admin certificate for opendistro security
- admin-key.pem, admin.pem
- Node certificate
- os-wazuh-key.pem, os-wazuh.pem
- HTTP certificate for wazuh-dashboard (port 5601) & wazuh-indexer (port 9200)
- os-wazuh_http.key, os-wazuh_http.pem
- Root CA certificate
- root-ca.key, root-ca.pem
It is also possible to use externally generated certificates for
wazuh-dashboard. Customise the dashboard_node_name variable so that you can
use a separate certificate and key for this service e.g:
dashboard_node_name: "wazuh-dashboard"

You will need to create two files matching the following pattern:
- {{ dashboard_node_name }}-key.pem for the private key
- {{ dashboard_node_name }}.pem for the certificate
Drop these files into $KAYOBE_CONFIG_PATH/environments/<environment>/wazuh/wazuh-certificates/ if
using the kayobe environments feature, or $KAYOBE_CONFIG_PATH/wazuh/wazuh-certificates/ if not.
The key for the external certificate should be in PKCS#8 format (in its header it may have BEGIN
PRIVATE KEY instead of BEGIN RSA PRIVATE KEY or BEGIN OPENSSH PRIVATE KEY).
Example OpenSSL invocation to convert to PKCS#8:
openssl pkcs8 -topk8 -nocrypt -in wazuh.key -out wazuh.key.pkcs8
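To confirm the conversion worked, the first line of the key file can be checked for the PKCS#8 header. A small sketch; the is_pkcs8 helper is illustrative, not part of the Wazuh tooling:

```shell
#!/bin/bash
# is_pkcs8: succeed if the private key file carries the unencrypted
# PKCS#8 PEM header (BEGIN PRIVATE KEY, with no RSA/OPENSSH qualifier).
is_pkcs8() {
  head -n 1 "$1" | grep -q '^-----BEGIN PRIVATE KEY-----$'
}

# Example usage:
# is_pkcs8 wazuh.key.pkcs8 && echo "PKCS#8" || echo "needs conversion"
```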
TODO: document how to use a local certificate. Do we need to override all certificates?
Wazuh ships with a large selection of Security Configuration Assessment rulesets. However, you may find you want to add more. This can be achieved via custom policies.
SKC supports this automatically: just add the policy file from this PR to
{{ kayobe_env_config_path }}/wazuh/custom_sca_policies.
Currently, Wazuh does not ship with a CIS benchmark for Rocky 9. You can find
the in-development policy here: wazuh/wazuh#17810. To
include this in your deployment, simply copy it to
{{ kayobe_env_config_path }}/wazuh/custom_sca_policies/cis_rocky_linux_9.yml.
Deploy Wazuh manager:
kayobe playbook run $KAYOBE_CONFIG_PATH/ansible/wazuh-manager.yml
If you are using the Wazuh-generated certificates, this will result in the creation of some certificates and keys (for custom certificates, adjust the paths accordingly). Encrypt the keys (and remember to commit them to git). If using the kayobe environments feature:

ansible-vault encrypt --vault-password-file ~/vault.pass $KAYOBE_CONFIG_PATH/environments/<environment>/wazuh/wazuh-certificates/*.key

Otherwise:

ansible-vault encrypt --vault-password-file ~/vault.pass $KAYOBE_CONFIG_PATH/ansible/wazuh/certificates/certs/*.key
The Wazuh portal should be accessible on port 443 of the Wazuh
manager’s IPs (using HTTPS, with the root CA cert in $KAYOBE_CONFIG_PATH/ansible/wazuh/certificates/wazuh-certificates/root-ca.pem).
The first login should be as the admin user,
with the opendistro_admin_password password in $KAYOBE_CONFIG_PATH/wazuh-secrets.yml.
This will create the necessary indices.
Troubleshooting
Logs are in /var/log/wazuh-indexer/wazuh.log. There are also logs in the journal.
Wazuh agent playbook is located in $KAYOBE_CONFIG_PATH/ansible/wazuh-agent.yml.
Wazuh agent variables file is located in $KAYOBE_CONFIG_PATH/inventory/group_vars/wazuh-agent/wazuh-agent.
You may need to modify some variables, including:
- wazuh_manager_address
Deploy the Wazuh agents:
kayobe playbook run $KAYOBE_CONFIG_PATH/ansible/wazuh-agent.yml
The Wazuh Agent is deployed to all hosts in the wazuh-agent
inventory group, comprising the seed group
plus the overcloud group (containing all hosts in the
OpenStack control plane).
[wazuh-agent:children]
seed
overcloud

The hosts running Wazuh Agent should automatically be registered and visible within the Wazuh Manager dashboard.
Note
It is good practice to use a Kayobe deploy hook
to automate deployment and configuration of the Wazuh Agent
following a run of kayobe overcloud host configure.
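As an illustration of the deploy-hook approach, the agent playbook can be symlinked into Kayobe's hooks directory so it runs after every host configure. A sketch following Kayobe's hooks convention; the sequence number 5 is an arbitrary example:

```shell
#!/bin/bash
# Register wazuh-agent.yml as a post-hook for 'kayobe overcloud host
# configure'. KAYOBE_CONFIG_PATH defaults to a scratch directory here
# so the sketch is runnable standalone.
KAYOBE_CONFIG_PATH=${KAYOBE_CONFIG_PATH:-$(mktemp -d)}
hook_dir="${KAYOBE_CONFIG_PATH}/hooks/overcloud-host-configure/post.d"
mkdir -p "${hook_dir}"
# Relative symlink back to the playbook shipped in the config repo.
ln -sfn ../../../ansible/wazuh-agent.yml "${hook_dir}/5-wazuh-agent.yml"
```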
The Wazuh agents should register with the Wazuh manager. This can be verified via the agents page in the Wazuh Portal. Check the CIS benchmark output in the agent section.
If you need to upgrade a Wazuh deployment based on Elasticsearch to a version based on OpenSearch, or you just need to uninstall all Wazuh components, use the Wazuh purge script: https://github.com/stackhpc/wazuh-server-purge