1.1 OSSEC HIDS
OSSEC HIDS is a host-based intrusion detection system (HIDS) used for security detection, visibility and compliance monitoring. It is based on a multi-platform agent that forwards system data (such as log messages, file hashes and detected anomalies) to a central manager, where it is analyzed and processed, resulting in security alerts. The agent delivers event data to the central manager over a secure and authenticated channel.
In addition, OSSEC HIDS provides a centralized syslog server and an agentless configuration monitoring system that provide security insight into events and changes on agentless devices such as firewalls, switches, routers, access points and other network devices.
1.2 OpenSCAP
OpenSCAP is an interpreter for OVAL (Open Vulnerability and Assessment Language) and XCCDF (Extensible Configuration Checklist Description Format), used to check system configurations and to detect vulnerable applications.
It is a well-known tool for checking security compliance and hardening enterprise environments against industry-standard security baselines.
1.3 Elastic Stack
Elastic Stack is a software suite (Filebeat, Logstash, Elasticsearch, Kibana) for collecting, parsing, indexing, storing, searching and presenting log data. It provides a web front-end with a high-level dashboard view of events, and it supports advanced analytics and data mining deep into the stored event data.
2. Components
The main components of Wazuh are the agent, which runs on each monitored host, and the server, which analyzes the data received from the agents and from agentless sources such as syslog. In addition, the server forwards event data to an Elasticsearch cluster, where the information is indexed and stored.
2.1 Wazuh agent
The Wazuh agent runs on Windows, Linux, Solaris, BSD and Mac operating systems. It collects different types of system and application data, which it forwards to the Wazuh server through an encrypted and authenticated channel. To establish this secure channel, a registration process involving a unique pre-shared key is used.
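To illustrate the general idea of authenticating a peer over a pre-shared key, here is a minimal challenge-response sketch in Python. It is a generic illustration of the concept only, not the actual OSSEC/Wazuh registration or transport protocol, and the key value is hypothetical:

import hashlib
import hmac
import os

# Generic pre-shared-key challenge-response; NOT the real Wazuh protocol.
PRE_SHARED_KEY = b"example-key-from-registration"  # hypothetical value

def make_challenge() -> bytes:
    # Server side: generate a random nonce.
    return os.urandom(16)

def sign_challenge(challenge: bytes, key: bytes) -> str:
    # Agent side: prove possession of the key without revealing it.
    return hmac.new(key, challenge, hashlib.sha256).hexdigest()

def verify(challenge: bytes, response: str, key: bytes) -> bool:
    # Server side: recompute the HMAC and compare in constant time.
    expected = hmac.new(key, challenge, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, response)

challenge = make_challenge()
response = sign_challenge(challenge, PRE_SHARED_KEY)
print("agent authenticated:", verify(challenge, response, PRE_SHARED_KEY))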
Agents can be used to monitor physical servers, virtual machines and cloud instances (such as Amazon AWS, Azure or Google Cloud). Pre-compiled agent installation packages are available for Linux, HP-UX, AIX, Solaris, Windows and Darwin (Mac OS X).
On Unix-based operating systems, the agent runs multiple processes that communicate with each other through a local Unix domain socket. One of these processes is responsible for the communication and for sending data to the Wazuh server. On Windows systems, there is a single agent process that runs multiple tasks using mutexes.
Different agent tasks or processes monitor the system in different ways (for example, monitoring file integrity, reading system log messages, and scanning system configuration).
The following figure shows the internal tasks and processes that occur at the agent level:
All agent processes have different purposes and settings. The following are brief descriptions of each:
Rootcheck: This process performs several tasks related to the detection of rootkits, malware and system anomalies. It also runs certain basic security checks against system configuration files.
Log collector: This agent component reads operating system and application log messages, including flat log files, standard Windows event logs and even Windows event channels. It can also be configured to periodically run and capture the output of specific commands.
Syscheck: This process performs file integrity monitoring (FIM), and on Windows systems it can also monitor registry keys. It detects changes in file content, ownership and other attributes, and records the creation and deletion of files. While it performs periodic FIM scans by default, it can also be configured to communicate with the operating system kernel to detect file changes in real time and to generate detailed change reports (diffs) for text files (a conceptual sketch follows this list).
OpenSCAP: This module uses published OVAL (Open Vulnerability and Assessment Language) and XCCDF (Extensible Configuration Checklist Description Format) baseline security profiles. By scanning the system periodically, it can find vulnerable applications or configurations that do not meet well-known standards, such as those defined in the CIS (Center for Internet Security) benchmarks.
Agent daemon: This process receives the data generated or collected by all the other agent components. It compresses and encrypts the data and delivers it to the server through an authenticated channel. This process runs in an isolated "chroot" (changed root) environment, meaning it has limited access to the monitored system. This improves the overall security of the agent, because it is the only process that connects to the network.
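As referenced in the Syscheck description above, the following is a minimal conceptual sketch of FIM in Python: build a baseline of file hashes, rescan later, and report created, deleted and modified files. It illustrates the idea only and is not Wazuh's syscheck implementation:

import hashlib
import os

def file_hash(path: str) -> str:
    # Return the SHA-256 digest of a file's contents.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def scan(directory: str) -> dict:
    # Build a baseline mapping of path -> hash for every file in a tree.
    baseline = {}
    for root, _dirs, files in os.walk(directory):
        for name in files:
            path = os.path.join(root, name)
            baseline[path] = file_hash(path)
    return baseline

def diff(old: dict, new: dict) -> None:
    # Compare two scans and report created, deleted and modified files.
    for path in new.keys() - old.keys():
        print("created:", path)
    for path in old.keys() - new.keys():
        print("deleted:", path)
    for path in old.keys() & new.keys():
        if old[path] != new[path]:
            print("modified:", path)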
2.2 Wazuh server
The server component is in charge of analyzing the data received from the agents and triggering alerts when an event matches a rule (for example, intrusion detected, file changed, configuration out of policy compliance, possible rootkit, etc.).
The server usually runs on a stand-alone physical machine, virtual machine or cloud instance, where it also runs agent components to monitor the server itself. The following is a list of the main server components:
Registration service: Registers new agents by provisioning and distributing a unique pre-shared authentication key to each of them. This process runs as a network service and supports authentication via TLS/SSL and/or a fixed password.
Remote daemon service: This is the service that receives data from the agents. It uses the pre-shared keys to validate each agent's identity and to encrypt the communications between the agent and the manager.
Analysis daemon: This is the process that performs the data analysis. It uses decoders to identify the type of information being processed (e.g., Windows events, SSHD logs, web server logs) and then extracts relevant data elements from the log messages (e.g., source IP address, event ID, user). Next, using rules, it can identify specific patterns in the decoded log records that may trigger alerts or even call for automated countermeasures (active responses), such as banning an IP address on the firewall.
RESTful API: This provides an interface to manage and monitor the configuration and deployment status of agents. It is also used by the Wazuh web interface, a Kibana app.
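As a sketch of how the RESTful API can be consumed, the Python snippet below lists registered agents. The manager address and credentials are placeholders, and the response layout shown (data/items) should be verified against your API version:

import requests

# Hypothetical manager address and credentials; the API listens on
# 55000/TCP and uses HTTP basic authentication over TLS.
API_URL = "https://manager.example.com:55000"
AUTH = ("apiuser", "apipassword")  # assumption, set when installing the API

# List registered agents and their status. verify=False is only for labs
# that use self-signed certificates; prefer proper certificate validation.
resp = requests.get(f"{API_URL}/agents", auth=AUTH, verify=False)
resp.raise_for_status()
for agent in resp.json().get("data", {}).get("items", []):
    print(agent.get("id"), agent.get("name"), agent.get("status"))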
2.3 Elastic Stack
Elastic Stack is a unified suite of popular open source projects for log management, including Elasticsearch, Logstash, Kibana and Filebeat. The projects especially relevant to the Wazuh solution are:
Elasticsearch: A highly scalable full-text search and analytics engine. Elasticsearch is distributed, meaning that the data (indices) is divided into shards, and each shard can have zero or more replicas.
Logstash: A tool to collect and parse logs to be saved in a storage system (e.g., Elasticsearch). Collected events can also be enriched and transformed using input, filter and output plugins.
Kibana: A flexible and intuitive web interface for mining, analyzing and visualizing data. It runs on top of the content indexed in an Elasticsearch cluster.
Filebeat: A lightweight forwarder used to ship logs across a network, usually to Logstash or Elasticsearch.
Wazuh integrates with Elastic Stack to provide a feed of decoded log messages to be indexed by Elasticsearch, as well as a real-time web console for alert and log data analysis. In addition, the Wazuh user interface (running on top of Kibana) can be used to manage and monitor your Wazuh infrastructure.
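For example, once alerts are indexed, they can be queried straight from Elasticsearch. This is a sketch assuming a local single-node setup and the default wazuh-alerts index pattern; field names such as @timestamp and rule.description are assumptions to check against your own mapping:

import requests

# Fetch the five most recent alerts from the wazuh-alerts indices
# (9200/TCP is Elasticsearch's default HTTP port).
query = {
    "size": 5,
    "sort": [{"@timestamp": {"order": "desc"}}],
}
resp = requests.get(
    "http://localhost:9200/wazuh-alerts-*/_search",
    json=query,
)
resp.raise_for_status()
for hit in resp.json()["hits"]["hits"]:
    src = hit["_source"]
    print(src.get("@timestamp"), src.get("rule", {}).get("description"))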
An Elasticsearch index is a collection of documents that share somewhat similar characteristics, such as certain common fields and shared data-retention requirements. Wazuh uses as many as three different daily indices to store the different event types:
wazuh-alerts: The index for alerts generated by the Wazuh server each time an event trips a rule.
wazuh-events: The index for all events (archive data) received from the agents, whether or not they trip a rule.
wazuh-monitoring: The index for data related to agent status over time. It is used by the web interface to show when individual agents are or have been "Active", "Disconnected" or "Never connected".
An index is composed of documents. For the indices above, a document is an individual alert, archived event or status event.
An Elasticsearch index is divided into one or more shards, and each shard can have one or more replicas. Each primary shard and each replica shard is a Lucene index, so an Elasticsearch index is made up of many Lucene indexes. When a search is run on an Elasticsearch index, all the shards are searched in parallel and the results are merged. Dividing indices into shards and replicas is used in multi-node Elasticsearch clusters to spread out searches and to provide high availability. A single-node Elasticsearch cluster usually has only one shard per index and no replicas.
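The shard model can be made concrete with the Elasticsearch REST API. The sketch below creates a test index with three primary shards and one replica each; the index name and the counts are arbitrary examples, and localhost:9200 assumes a local node:

import requests

# Create an index with explicit shard/replica settings.
settings = {
    "settings": {
        "number_of_shards": 3,    # three primary (Lucene) shards
        "number_of_replicas": 1,  # one replica per primary shard
    }
}
resp = requests.put("http://localhost:9200/example-index", json=settings)
print(resp.json())  # {"acknowledged": true, ...} on success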
3. Architecture
The Wazuh architecture is based on agents running on the monitored hosts that forward log data to a central server. In addition, agentless devices (such as firewalls, switches, routers and access points) are supported: they can actively submit log data via syslog and/or have their configurations probed periodically for changes, with the data later forwarded to the central server. The central server decodes and analyzes the incoming information and passes the results along to an Elasticsearch cluster for indexing and storage.
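As a small illustration of the agentless path, the Python standard library can emit syslog messages to a remote receiver. The manager address is a placeholder, and 514/UDP is the conventional syslog port:

import logging
import logging.handlers

# Forward a message to a central syslog receiver, as an agentless device
# would (the address is an assumption for illustration).
handler = logging.handlers.SysLogHandler(address=("manager.example.com", 514))
logger = logging.getLogger("agentless-demo")
logger.addHandler(handler)
logger.warning("configuration change detected on device fw01")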
An Elasticsearch cluster is a collection of one or more nodes (servers) that communicate with each other to perform read and write operations on indices. Small Wazuh deployments, which do not need to process large amounts of data, can easily be handled by a single-node cluster, while multi-node clusters are recommended when there are many monitored endpoints, a large volume of data, or high-availability requirements.
When the Wazuh server and the Elasticsearch cluster are located on different hosts, Filebeat can securely forward Wazuh alerts and/or archive events to the Elasticsearch server using TLS encryption.
The following figure illustrates how the components are distributed when the Wazuh server and the Elasticsearch cluster run on different hosts. Note that for a multi-node cluster, there will be multiple Elastic Stack servers to which Filebeat can forward data:
In smaller Wazuh deployments, Wazuh and the Elastic Stack, using a single-node Elasticsearch instance, can both be deployed on a single server. In this scenario, Logstash can read Wazuh alerts and/or archived events directly from the local file system and feed them to the local Elasticsearch instance.
4. Communication and data flow
4.1 Communication between agent and server
Wazuh agents use the OSSEC message protocol to send collected events to the Wazuh server over port 1514 (UDP or TCP). The Wazuh server then decodes the received events and checks them against its rules using the analysis engine. Events that trip a rule are augmented with alert data, such as the rule ID and the rule name. Depending on whether or not a rule is tripped, events can be stored in one or both of the following files (a parsing sketch follows the list):
The file /var/ossec/logs/archives/archives.json contains all events, whether or not they tripped a rule.
The file /var/ossec/logs/alerts/alerts.json contains only the events that tripped a rule.
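Since alerts.json holds one JSON document per line, it is straightforward to post-process. Below is a minimal sketch; the field names (rule.id, rule.level, rule.description) follow the usual Wazuh alert layout, but treat them as assumptions to verify:

import json

# Read the newline-delimited alerts file and summarize the tripped rules.
with open("/var/ossec/logs/alerts/alerts.json") as f:
    for line in f:
        alert = json.loads(line)
        rule = alert.get("rule", {})
        print(rule.get("id"), rule.get("level"), rule.get("description"))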
The Wazuh message protocol uses Blowfish encryption with a 192-bit key and a full 16-round implementation, or AES encryption with 128-bit blocks and 256-bit keys.
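To make the AES variant concrete, here is a generic AES-CBC round trip with a 256-bit key and 128-bit blocks, using the third-party pycryptodome package. This only demonstrates the cipher parameters; it is not Wazuh's actual message framing or padding scheme:

import os
from Crypto.Cipher import AES  # provided by the "pycryptodome" package

key = os.urandom(32)  # 256-bit key
iv = os.urandom(16)   # one 128-bit block

# Pad the payload to a multiple of the 16-byte AES block size.
plaintext = b"example event payload".ljust(32)
ciphertext = AES.new(key, AES.MODE_CBC, iv).encrypt(plaintext)
decrypted = AES.new(key, AES.MODE_CBC, iv).decrypt(ciphertext)
print(decrypted)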
4.2 Wazuh-Elastic communication
In large deployments, the Wazuh server uses Filebeat to send the alert and event data to Logstash (5000/TCP) on the Elastic Stack server, using TLS encryption. In a single-host architecture, Logstash reads the events/alerts directly from the local file system, so Filebeat is not needed.
Logstash formats the incoming data and optionally enriches it with GeoIP information before sending it to Elasticsearch (port 9200/TCP). Once the data is indexed in Elasticsearch, Kibana (port 5601/TCP) is used to mine and visualize the information.
The Wazuh app runs inside Kibana, constantly querying the RESTful API (port 55000/TCP on the Wazuh manager) to display configuration and status information about the server and the agents, and to restart agents when required. This communication is encrypted with TLS and authenticated with a username and password.
5. Required ports
For Wazuh and Elastic Stack to be installed, several network ports must be available and open so that the different components can communicate properly.
6. Archival data storage
In addition to being sent to Elasticsearch, alert and non-alert events are stored in files on the Wazuh server. These files can be written in JSON format (.json) and/or plain text format (.log, with no decoded fields but more compact). They are compressed and signed daily using MD5 and SHA1 checksums. The directory and file name structure is as follows:
It is recommended that archive files be rotated and backed up according to the storage capacity of the Wazuh manager server. Using cron jobs, you can easily schedule keeping only a certain time window of archive files on the manager (e.g., the last year or the last three months).
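A retention job of that kind could look like the following hedged sketch, which deletes archive files older than roughly three months. The retention window, and the assumption that everything under /var/ossec/logs/archives is safe to prune, must be adapted to your deployment (for example, run it daily from cron):

import os
import time

RETENTION_SECONDS = 90 * 24 * 3600  # assumed ~3-month retention window
ARCHIVE_ROOT = "/var/ossec/logs/archives"
cutoff = time.time() - RETENTION_SECONDS

# Walk the archive tree and remove files not modified since the cutoff.
for dirpath, _dirs, files in os.walk(ARCHIVE_ROOT):
    for name in files:
        path = os.path.join(dirpath, name)
        if os.path.getmtime(path) < cutoff:
            os.remove(path)
            print("removed", path)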
On the other hand, you can choose not to store archive files at all and rely on Elasticsearch for archival storage instead, especially if you run periodic Elasticsearch snapshot backups and/or a multi-node Elasticsearch cluster with shard replicas for high availability. You could even use a cron job to move snapshotted indices to a final data storage server and sign them using the MD5 and SHA1 algorithms.