# How to Define Separate Indices for Different Logs in Filebeat/ELK?

To define separate indices for different logs in Filebeat, Logstash, and Elasticsearch, you can use various techniques to route logs to different indices based on their types or other criteria. This separation allows for better organization, querying, and management of your logs.

Here’s a detailed guide on how to configure Filebeat and Logstash to send different logs to separate indices in Elasticsearch:

### **1. Filebeat Configuration**

You can use Filebeat to define multiple inputs and tag each one so different logs can be told apart downstream. When shipping through Logstash, Filebeat does not choose the index name itself; that routing happens in Logstash. (Filebeat can set index names on its own only when its output is Elasticsearch directly.)

**Example Filebeat Configuration (`filebeat.yml`):**

```yaml
filebeat.inputs:
  - type: log
    paths:
      - /path/to/app1/*.log
    fields:
      type: app1

  - type: log
    paths:
      - /path/to/app2/*.log
    fields:
      type: app2

output.logstash:
  hosts: ["localhost:5044"]
```

In this example:

- `type: app1` and `type: app2` are custom fields that identify each log source. Because `fields_under_root` is not enabled, they are nested under `fields` in the event, which is why Logstash tests `[fields][type]` when routing. (Setting `fields_under_root: true`, as some examples do, would move the field to the event root and break those conditionals.)
- On Filebeat 7.16 and later, the `log` input type is deprecated in favour of `filestream`; the `fields` option works the same way with either input.
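As an aside, if you ship from Filebeat directly to Elasticsearch (skipping Logstash entirely), Filebeat itself can route events to per-type indices via `output.elasticsearch.indices`. A minimal sketch, assuming the same custom `type` field as above:

```yaml
# Sketch: direct Filebeat -> Elasticsearch routing, no Logstash in between.
# The conditions assume the inputs do NOT set fields_under_root, so the
# custom field lives at fields.type; with fields_under_root: true you
# would match on "type" instead.
output.elasticsearch:
  hosts: ["localhost:9200"]
  indices:
    - index: "app1-logs-%{+yyyy.MM.dd}"
      when.equals:
        fields.type: "app1"
    - index: "app2-logs-%{+yyyy.MM.dd}"
      when.equals:
        fields.type: "app2"
  # Events matching no condition fall back to the default Filebeat index.
```

Note that Filebeat supports only one output at a time, so this replaces `output.logstash` rather than complementing it.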

### **2. Logstash Configuration**

In Logstash, you can use conditionals to route data to different indices based on the fields set by Filebeat.

**Example Logstash Configuration (`logstash.conf`):**

```conf
input {
  beats {
    port => 5044
  }
}

filter {
  # Example filters, adjust as needed for your data
  if [fields][type] == "app1" {
    # Add any specific processing for app1 logs if needed
  } else if [fields][type] == "app2" {
    # Add any specific processing for app2 logs if needed
  }
}

output {
  if [fields][type] == "app1" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "app1-logs-%{+YYYY.MM.dd}"
    }
  } else if [fields][type] == "app2" {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "app2-logs-%{+YYYY.MM.dd}"
    }
  } else {
    elasticsearch {
      hosts => ["localhost:9200"]
      index => "default-logs-%{+YYYY.MM.dd}"
    }
  }

  # Optional: Output to stdout for debugging
  stdout { codec => rubydebug }
}

```

In this example:

- Logs are routed to separate indices (`app1-logs-*` and `app2-logs-*`) based on the `[fields][type]` value set by Filebeat.
- Events without a recognized `type` fall through to a default index (`default-logs-*`).
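As an alternative to the conditionals above, a single `elasticsearch` output can interpolate the field value into the index name using Logstash's sprintf syntax. A sketch, assuming every event carries a valid `[fields][type]`:

```conf
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    # The index name is built from the event's fields.type value,
    # e.g. an event with fields.type == "app1" goes to "app1-logs-<date>".
    # An event missing the field would create a literal
    # "%{[fields][type]}-logs-..." index, so only use this form when the
    # field is guaranteed to be set (otherwise keep the conditional fallback).
    index => "%{[fields][type]}-logs-%{+YYYY.MM.dd}"
  }
}
```

This keeps the pipeline shorter at the cost of less explicit control over unexpected values.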

### **3. Elasticsearch Configuration**

No special configuration is needed in Elasticsearch for handling multiple indices. However, make sure your index patterns in Kibana match the indices created.

1. **Create Index Patterns in Kibana:**
    - Follow the steps in the next section to create patterns for `app1-logs-*` and `app2-logs-*`.
2. **Verify Data:**
    - Use `Discover` in Kibana to confirm that events from each application land in the expected index.
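Although no configuration is strictly required, it is often worth defining an index template so that each day's newly created index picks up consistent settings and mappings. A minimal sketch using the composable index template API (available since Elasticsearch 7.8; the shard/replica numbers are placeholders, adjust them to your cluster):

```
PUT _index_template/app1-logs
{
  "index_patterns": ["app1-logs-*"],
  "template": {
    "settings": {
      "number_of_shards": 1,
      "number_of_replicas": 1
    },
    "mappings": {
      "properties": {
        "@timestamp": { "type": "date" }
      }
    }
  }
}
```

Repeat with a second template for `app2-logs-*` if the two applications need different mappings.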

### **4. Example Index Pattern Creation in Kibana**

1. **Navigate to Index Patterns:**
    - Go to `Management` -> `Index Patterns` in Kibana (in Kibana 8.x this lives under `Stack Management` -> `Data Views`).
2. **Create Patterns:**
    - Click on `Create index pattern`.
    - Enter `app1-logs-*` as the pattern to match indices for app1 logs.
    - Create another index pattern for `app2-logs-*`.
3. **Configure Fields and Settings:**
    - Configure the time field and other settings as needed for each index pattern.

### **Summary**

- **Filebeat:** Set up multiple inputs with custom fields to identify different types of logs.
- **Logstash:** Use conditionals in the configuration to route logs to different indices based on these fields.
- **Elasticsearch:** Indices are created automatically using the names supplied by Logstash; no extra configuration is required.
- **Kibana:** Create index patterns for each of the indices to visualize and explore the data.

By following these steps, you can effectively manage and organize different log types into separate indices in Elasticsearch, making it easier to analyze and monitor your log data.