Importing CloudFront Logs into Elasticsearch with Logstash

Elasticsearch is a great way to monitor usage of your AWS CloudFront websites. There are some fairly turnkey paths for shipping CloudFront logs to hosted Elasticsearch services, like Logz.io or Amazon Elasticsearch. Here's how to do it with your own self-hosted Elasticsearch and Logstash instances:

  1. Set up CloudFront logging
  2. Set up SQS notifications
  3. Set up a test Logstash pipeline
  4. Set up the main Logstash pipeline
  5. View the logs in Kibana

Set Up CloudFront Logging

First, you need an S3 bucket to store your CloudFront logs. You can use an existing bucket, or create a new one. You don't need to set up any special permissions for the bucket, but you probably do want to make sure the bucket denies public access to its contents by default. In this example, we'll use an S3 bucket named my-log-bucket for our logs, and store the CloudFront logs under a bucket directory named my-cloudfront-logs. We'll also store the logs for each CloudFront distribution in its own subdirectory of that directory: so for the distribution serving the www.example.com domain, we'll store its logs under the my-cloudfront-logs/www.example.com subdirectory.

Once you have the S3 log bucket created and available, update each of your CloudFront distributions to log to it. You can do this via the AWS console by editing the distribution, turning on the "Standard Logging" setting, setting the "S3 bucket for logs" to your S3 log bucket (my-log-bucket.s3.amazonaws.com), and setting the "Log prefix" to the directory path of the bucket subdirectory under which you'll store the logs (my-cloudfront-logs/www.example.com/). Save your changes, and every few minutes CloudFront will save a new .gz file to the my-cloudfront-logs/www.example.com/ subdirectory of my-log-bucket (see the CloudFront access logs documentation for details).
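
If you'd rather script this step, a rough CLI sketch follows; it assumes a distribution ID of E123456789ABCD and that jq is installed, and you should review the fetched config before pushing it back:

# fetch the current distribution config plus its ETag (required by update-distribution)
aws cloudfront get-distribution-config --id E123456789ABCD > dist.json
ETAG=$(jq -r '.ETag' dist.json)

# enable standard logging to the log bucket, then push the updated config back
jq '.DistributionConfig.Logging = {
      "Enabled": true,
      "IncludeCookies": false,
      "Bucket": "my-log-bucket.s3.amazonaws.com",
      "Prefix": "my-cloudfront-logs/www.example.com/"
    } | .DistributionConfig' dist.json > dist-config.json
aws cloudfront update-distribution --id E123456789ABCD \
    --if-match "$ETAG" --distribution-config file://dist-config.json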

Set Up SQS Notifications

Next, create a new SQS queue. We'll call ours my-cloudfront-log-notifications, and create it in the us-east-1 AWS region. When you create the queue, configure its "Receive message wait time" setting to around 10 seconds; this will ensure the SQS client doesn't make more SQS requests than needed (a 10-second setting will keep the cost of this queue down to less than $1/month).
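
For reference, a minimal CLI sketch of creating such a queue (the queue name and region are this article's examples):

aws sqs create-queue \
    --queue-name my-cloudfront-log-notifications \
    --attributes ReceiveMessageWaitTimeSeconds=10 \
    --region us-east-1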

The only other thing you need to do when creating the queue is add an access policy to it that allows S3 to send messages to it. The policy should look like the following (replace my-cloudfront-log-notifications with your queue name, us-east-1 with your queue's region, my-log-bucket with your log bucket's name, and 123456789012 with your AWS account ID):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "SQS:SendMessage",
      "Resource": "arn:aws:sqs:us-east-1:123456789012:my-cloudfront-log-notifications",
      "Condition": {
        "StringEquals": {
          "aws:SourceAccount": "123456789012"
        },
        "ArnLike": {
          "aws:SourceArn": "arn:aws:s3:*:*:my-log-bucket"
        }
      }
    }
  ]
}
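
If you prefer the CLI here too, you can attach the above policy with set-queue-attributes. Note that the Policy attribute value must be the policy document serialized as a JSON string; this sketch assumes you saved the policy as sqs-policy.json and have jq available:

# wrap the policy document as a JSON-encoded string, e.g. {"Policy": "{\"Version\": ...}"}
jq -n --arg policy "$(cat sqs-policy.json)" '{Policy: $policy}' > attributes.json
aws sqs set-queue-attributes \
    --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/my-cloudfront-log-notifications \
    --attributes file://attributes.json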

With the SQS queue created, update your S3 bucket to send all object-creation events to it. You can do this via the AWS console by selecting the bucket and opening the "Events" block in the "Advanced settings" section of the bucket's "Properties" tab. There you can add a notification: name it my-cloudfront-log-configuration, check the "All object create events" checkbox, set the "Prefix" to my-cloudfront-logs/, and send it to your SQS queue, my-cloudfront-log-notifications.

Alternatively, you can add the same notification via the s3api CLI's put-bucket-notification-configuration command, using a notification configuration JSON file like this:

{
  "QueueConfigurations": [
    {
      "Id": "my-cloudfront-log-configuration",
      "QueueArn": "arn:aws:sqs:us-east-1:123456789012:my-cloudfront-log-notifications",
      "Events": [
        "s3:ObjectCreated:*"
      ],
      "Filter": {
        "Key": {
          "FilterRules": [
            {
              "Name": "prefix",
              "Value": "my-cloudfront-logs/"
            }
          ]
        }
      }
    }
  ]
}
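
Assuming you saved the above JSON as notification.json, the command itself would look like this:

aws s3api put-bucket-notification-configuration \
    --bucket my-log-bucket \
    --notification-configuration file://notification.json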

Now that you've got your S3 bucket notifications wired up to your SQS queue, if you look at the SQS queue in the AWS console, under the charts on the Monitoring tab you'll start to see messages received every few minutes.
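
You can also check from the command line that notifications are arriving, by polling the queue for a message (this only peeks; the message becomes visible in the queue again once its visibility timeout expires):

aws sqs receive-message \
    --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/my-cloudfront-log-notifications \
    --wait-time-seconds 10 \
    --region us-east-1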

Set Up a Test Logstash Pipeline

Download a sample .gz log file from your S3 bucket and copy it to the machine where Logstash runs. Move the file to a directory that Logstash can access, and make sure Logstash has read permission on the file. Our sample file will live at /var/log/my-cloudfront-logs/www.example.com/E123456789ABCD.2020-01-02-03.abcd1234.gz.
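
For example, a sketch of pulling down one sample file with the s3 CLI and making it readable by Logstash (the exact object key will differ; list the bucket directory to find a real one):

# list available logfiles, then copy one down
aws s3 ls s3://my-log-bucket/my-cloudfront-logs/www.example.com/
aws s3 cp \
    s3://my-log-bucket/my-cloudfront-logs/www.example.com/E123456789ABCD.2020-01-02-03.abcd1234.gz \
    /tmp/
# move it into place and make sure the logstash user can read it
sudo mkdir -p /var/log/my-cloudfront-logs/www.example.com
sudo mv /tmp/E123456789ABCD.2020-01-02-03.abcd1234.gz /var/log/my-cloudfront-logs/www.example.com/
sudo chmod 644 /var/log/my-cloudfront-logs/www.example.com/E123456789ABCD.2020-01-02-03.abcd1234.gz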

Copy the following my-cloudfront-pipeline.conf file to the /etc/logstash/conf.d directory on your Logstash machine (replacing the input path with the path to your own sample .gz log file), tail the Logstash logs (journalctl -u logstash -f if you manage it with systemd), and restart the Logstash service (sudo systemctl restart logstash):

# /etc/logstash/conf.d/my-cloudfront-pipeline.conf
input {
  file {
    file_completed_action => "log"
    file_completed_log_path => "/var/lib/logstash/cloudfront-completed.log"
    mode => "read"
    path => "/var/log/my-cloudfront-logs/www.example.com/E123456789ABCD.2020-01-02-03.abcd1234.gz"
    sincedb_path => "/var/lib/logstash/cloudfront-since.db"
    type => "cloudfront"
  }
}

filter {
  if [type] == "cloudfront" {
    if (("#Version: 1.0" in [message]) or ("#Fields: date" in [message])) {
      drop {}
    }

    mutate {
      rename => {
        "type" => "[@metadata][type]"
      }
      # strip dashes that indicate empty fields
      gsub => ["message", "\t-(?=\t)", "    "] # literal tab
    }

#Fields: date time x-edge-location sc-bytes c-ip cs-method cs(Host) cs-uri-stem sc-status cs(Referer) cs(User-Agent) cs-uri-query cs(Cookie) x-edge-result-type x-edge-request-id x-host-header cs-protocol cs-bytes time-taken x-forwarded-for ssl-protocol ssl-cipher x-edge-response-result-type cs-protocol-version fle-status fle-encrypted-fields c-port time-to-first-byte x-edge-detailed-result-type sc-content-type sc-content-len sc-range-start sc-range-end
    csv {
      separator => "    " # literal tab
      columns => [
        "date",
        "time",
        "x_edge_location",
        "sc_bytes",
        "c_ip",
        "cs_method",
        "cs_host",
        "cs_uri_stem",
        "sc_status",
        "cs_referer",
        "cs_user_agent",
        "cs_uri_query",
        "cs_cookie",
        "x_edge_result_type",
        "x_edge_request_id",
        "x_host_header",
        "cs_protocol",
        "cs_bytes",
        "time_taken",
        "x_forwarded_for",
        "ssl_protocol",
        "ssl_cipher",
        "x_edge_response_result_type",
        "cs_protocol_version",
        "fle_status",
        "fle_encrypted_fields",
        "c_port",
        "time_to_first_byte",
        "x_edge_detailed_result_type",
        "sc_content_type",
        "sc_content_len",
        "sc_range_start",
        "sc_range_end"
      ]
      convert => {
        "c_port" => "integer"
        "cs_bytes" => "integer"
        "sc_bytes" => "integer"
        "sc_content_len" => "integer"
        "sc_range_end" => "integer"
        "sc_range_start" => "integer"
        "sc_status" => "integer"
        "time_taken" => "float"
        "time_to_first_byte" => "float"
      }
      add_field => {
        "datetime" => "%{date} %{time}"
        "[@metadata][document_id]" => "%{x_edge_request_id}"
      }
      remove_field => ["cloudfront_fields", "cloudfront_version", "message"]
    }

    # parse datetime
    date {
      match => ["datetime", "yy-MM-dd HH:mm:ss"]
      remove_field => ["datetime", "date", "time"]
    }

    # lookup geolocation of client ip address
    geoip {
      source => "c_ip"
      target => "geo"
    }

    # parse user-agent into subfields
    urldecode {
      field => "cs_user_agent"
    }
    useragent {
      source => "cs_user_agent"
      target => "ua"
      add_field => {
        "user_agent.name" => "%{[ua][name]}"
        "user_agent.version" => "%{[ua][major]}"
        "user_agent.device.name" => "%{[ua][device]}"
        "user_agent.os.name" => "%{[ua][os_name]}"
        "user_agent.os.version" => "%{[ua][os_major]}"
      }
      remove_field => ["cs_user_agent", "ua"]
    }

    # pull logfile path from s3 metadata, if present
    if [@metadata][s3][object_key] {
      mutate {
        add_field => {
          "path" => "%{[@metadata][s3][object_key]}"
        }
      }
    }

    # strip directory path from logfile path, and canonicalize field name
    mutate {
      rename => {
        "path" => "log.file.path"
      }
      gsub => ["log.file.path", ".*/", ""]
      remove_field => "host"
    }

    # canonicalize field names, and drop unwanted fields
    mutate {
      rename => {
        "c_ip" => "client.ip"
        "cs_bytes" => "http.request.bytes"
        "sc_content_len" => "http.response.body.bytes"
        "sc_content_type" => "http.response.body.type"
        "cs_method" => "http.request.method"
        "cs_protocol" => "url.scheme"
        "cs_protocol_version" => "http.version"
        "cs_referer" => "http.request.referrer"
        "cs_uri_query" => "url.query"
        "cs_uri_stem" => "url.path"
        "sc_bytes" => "http.response.bytes"
        "sc_status" => "http.response.status_code"
        "ssl_cipher" => "tls.cipher"
        "ssl_protocol" => "tls.protocol_version"
        "x_host_header" => "url.domain"
      }
      gsub => [
        "http.version", "HTTP/", "",
        "tls.protocol_version", "TLSv", ""
      ]
      remove_field => [
        "c_port",
        "cs_cookie",
        "cs_host",
        "fle_encrypted_fields",
        "fle_status",
        "sc_range_end",
        "sc_range_start",
        "x_forwarded_for"
      ]
    }
  }
}

output {
  stdout {
    codec => "rubydebug"
  }
}

You should see a slew of entries like the following in the Logstash logs, one for each entry in the sample log file (note that the fields will be displayed in a different order every time you run this):

Jan 02 03:04:05 logs1 logstash[12345]: {
Jan 02 03:04:05 logs1 logstash[12345]:     "x_edge_detailed_result_type" => "Hit",
Jan 02 03:04:05 logs1 logstash[12345]:                      "@timestamp" => 2020-01-02T03:01:02.000Z,
Jan 02 03:04:05 logs1 logstash[12345]:          "user_agent.device.name" => "EML-AL00",
Jan 02 03:04:05 logs1 logstash[12345]:                      "time_taken" => 0.001,
Jan 02 03:04:05 logs1 logstash[12345]:                    "http.version" => "2.0",
Jan 02 03:04:05 logs1 logstash[12345]:           "user_agent.os.version" => "8",
Jan 02 03:04:05 logs1 logstash[12345]:        "http.response.body.bytes" => nil,
Jan 02 03:04:05 logs1 logstash[12345]:                      "tls.cipher" => "ECDHE-RSA-AES128-GCM-SHA256",
Jan 02 03:04:05 logs1 logstash[12345]:             "http.response.bytes" => 2318,
Jan 02 03:04:05 logs1 logstash[12345]:                        "@version" => "1",
Jan 02 03:04:05 logs1 logstash[12345]:              "time_to_first_byte" => 0.001,
Jan 02 03:04:05 logs1 logstash[12345]:             "http.request.method" => "GET",
Jan 02 03:04:05 logs1 logstash[12345]:               "x_edge_request_id" => "s7lmJasUXiAm7w2oR34Gfg5zTgeQSTkYwiYV1pnz5Hzv8mRmBzyGrw==",
Jan 02 03:04:05 logs1 logstash[12345]:                   "log.file.path" => "EML9FBPJY2494.2020-01-02-03.abcd1234.gz",
Jan 02 03:04:05 logs1 logstash[12345]:              "x_edge_result_type" => "Hit",
Jan 02 03:04:05 logs1 logstash[12345]:              "http.request.bytes" => 388,
Jan 02 03:04:05 logs1 logstash[12345]:           "http.request.referrer" => "http://baidu.com/",
Jan 02 03:04:05 logs1 logstash[12345]:                       "client.ip" => "192.0.2.0",
Jan 02 03:04:05 logs1 logstash[12345]:                 "user_agent.name" => "UC Browser",
Jan 02 03:04:05 logs1 logstash[12345]:              "user_agent.version" => "11",
Jan 02 03:04:05 logs1 logstash[12345]:                       "url.query" => nil,
Jan 02 03:04:05 logs1 logstash[12345]:         "http.response.body.type" => "text/html",
Jan 02 03:04:05 logs1 logstash[12345]:                      "url.domain" => "www.example.com",
Jan 02 03:04:05 logs1 logstash[12345]:                 "x_edge_location" => "LAX50-C3",
Jan 02 03:04:05 logs1 logstash[12345]:       "http.response.status_code" => 200,
Jan 02 03:04:05 logs1 logstash[12345]:                             "geo" => {
Jan 02 03:04:05 logs1 logstash[12345]:                     "ip" => "192.0.2.0",
Jan 02 03:04:05 logs1 logstash[12345]:            "region_name" => "Shanghai",
Jan 02 03:04:05 logs1 logstash[12345]:           "country_name" => "China",
Jan 02 03:04:05 logs1 logstash[12345]:               "timezone" => "Asia/Shanghai",
Jan 02 03:04:05 logs1 logstash[12345]:              "longitude" => 121.4012,
Jan 02 03:04:05 logs1 logstash[12345]:          "country_code3" => "CN",
Jan 02 03:04:05 logs1 logstash[12345]:               "location" => {
Jan 02 03:04:05 logs1 logstash[12345]:             "lon" => 121.4012,
Jan 02 03:04:05 logs1 logstash[12345]:             "lat" => 31.0449
Jan 02 03:04:05 logs1 logstash[12345]:         },
Jan 02 03:04:05 logs1 logstash[12345]:            "region_code" => "SH",
Jan 02 03:04:05 logs1 logstash[12345]:          "country_code2" => "CN",
Jan 02 03:04:05 logs1 logstash[12345]:         "continent_code" => "AS",
Jan 02 03:04:05 logs1 logstash[12345]:               "latitude" => 31.0449
Jan 02 03:04:05 logs1 logstash[12345]:     },
Jan 02 03:04:05 logs1 logstash[12345]:                      "url.scheme" => "https",
Jan 02 03:04:05 logs1 logstash[12345]:            "tls.protocol_version" => "1.2",
Jan 02 03:04:05 logs1 logstash[12345]:              "user_agent.os.name" => "Android",
Jan 02 03:04:05 logs1 logstash[12345]:     "x_edge_response_result_type" => "Hit",
Jan 02 03:04:05 logs1 logstash[12345]:                        "url.path" => "/"
Jan 02 03:04:05 logs1 logstash[12345]: }

These entries show you what Logstash will push to Elasticsearch once you hook it up. You can tweak this my-cloudfront-pipeline.conf file and restart Logstash again and again, until you get exactly the field names and values you want pushed to Elasticsearch.

Let's look at each part of the pipeline individually.

In the input section, we use the file input to read just our one sample file:

input {
  file {
    file_completed_action => "log"
    file_completed_log_path => "/var/lib/logstash/cloudfront-completed.log"
    mode => "read"
    path => "/var/log/my-cloudfront-logs/www.example.com/E123456789ABCD.2020-01-02-03.abcd1234.gz"
    sincedb_path => "/var/lib/logstash/cloudfront-since.db"
    type => "cloudfront"
  }
}

The key thing here is that we set the type field to cloudfront, which we'll use in the filter section below to apply our filter logic only to entries of this type. If you're only ever going to process CloudFront log files with this pipeline, you can omit all the type-related bits of the pipeline, which will simplify it.

In the filter section, the first step checks that the type field is set to "cloudfront", and only executes the rest of the filter block if it is:

filter {
  if [type] == "cloudfront" {

Then the next step in the filter section drops the two header lines in each CloudFront log file, the first beginning with #Version and the second beginning with #Fields:

    if (("#Version: 1.0" in [message]) or ("#Fields: date" in [message])) {
      drop {}
    }

After that, the next step renames the type field to [@metadata][type], so that it won't be pushed to the Elasticsearch index. I've chosen to use an Elasticsearch index dedicated to CloudFront logs; if you want to push your CloudFront logs into an index shared with other data, however, you may want to keep the type field.

    mutate {
      rename => {
        "type" => "[@metadata][type]"
      }

The second half of this mutate filter strips out the dash characters that indicate an empty field value for any of the columns in the log entry. Note that the last argument of this gsub function is a literal tab; make sure your text editor doesn't convert it to spaces!

      # strip dashes that indicate empty fields
      gsub => ["message", "\t-(?=\t)", "	"] # literal tab
    }

For example, it will convert an entry like this:

2020-01-02 03:03:03 HIO50-C1 6564 192.0.2.0 GET d2c4n4ttot8c65.cloudfront.net / 200 - Mozilla/5.0%20(Windows%20NT%206.1;%20WOW64;%20rv:40.0)%20Gecko/20100101%20Firefox/40.1 - - Miss nY0knXse4vDxS5uOBe3YAhDpH809bqhsILUUFAtE_4ZLlfXCiYcD0A== www.example.com https 170 0.164 - TLSv1.2 ECDHE-RSA-AES128-GCM-SHA256 Miss HTTP/1.1 - - 62684 0.164 Miss text/html 6111 - -

Into this (removing the dashes that indicated empty values, but not the dashes in non-empty values like the date or cipher suite):

2020-01-02 03:03:03 HIO50-C1 6564 192.0.2.0 GET d2c4n4ttot8c65.cloudfront.net / 200 Mozilla/5.0%20(Windows%20NT%206.1;%20WOW64;%20rv:40.0)%20Gecko/20100101%20Firefox/40.1 Miss nY0knXse4vDxS5uOBe3YAhDpH809bqhsILUUFAtE_4ZLlfXCiYcD0A== www.example.com https 170 0.164 TLSv1.2 ECDHE-RSA-AES128-GCM-SHA256 Miss HTTP/1.1 62684 0.164 Miss text/html 6111

The next step is the crux of the process: using the csv filter to convert each tab-separated log line into named fields. Note that the separator attribute value is also a literal tab:

#Fields: date time x-edge-location sc-bytes c-ip cs-method cs(Host) cs-uri-stem sc-status cs(Referer) cs(User-Agent) cs-uri-query cs(Cookie) x-edge-result-type x-edge-request-id x-host-header cs-protocol cs-bytes time-taken x-forwarded-for ssl-protocol ssl-cipher x-edge-response-result-type cs-protocol-version fle-status fle-encrypted-fields c-port time-to-first-byte x-edge-detailed-result-type sc-content-type sc-content-len sc-range-start sc-range-end
    csv {
      separator => "	" # literal tab
      columns => [
        "date",
        "time",
        "x_edge_location",
        "sc_bytes",
        "c_ip",
        "cs_method",
        "cs_host",
        "cs_uri_stem",
        "sc_status",
        "cs_referer",
        "cs_user_agent",
        "cs_uri_query",
        "cs_cookie",
        "x_edge_result_type",
        "x_edge_request_id",
        "x_host_header",
        "cs_protocol",
        "cs_bytes",
        "time_taken",
        "x_forwarded_for",
        "ssl_protocol",
        "ssl_cipher",
        "x_edge_response_result_type",
        "cs_protocol_version",
        "fle_status",
        "fle_encrypted_fields",
        "c_port",
        "time_to_first_byte",
        "x_edge_detailed_result_type",
        "sc_content_type",
        "sc_content_len",
        "sc_range_start",
        "sc_range_end"
      ]
    }

The columns attribute lists the name of each field, in order. Later in the pipeline we'll rename many of these fields to use ECS nomenclature, but for clarity, this step uses the field names as CloudFront defines them.

The middle part of the csv filter converts the fields containing numbers into actual numeric types, via the convert attribute map:

      convert => {
        "c_port" => "integer"
        "cs_bytes" => "integer"
        "sc_bytes" => "integer"
        "sc_content_len" => "integer"
        "sc_range_end" => "integer"
        "sc_range_start" => "integer"
        "sc_status" => "integer"
        "time_taken" => "float"
        "time_to_first_byte" => "float"
      }

The add_field part of the csv filter combines the separate date and time fields into a single datetime field (to be converted into a timestamp object later), and copies the x_edge_request_id field value into a [@metadata][document_id] field:

      add_field => {
        "datetime" => "%{date} %{time}"
        "[@metadata][document_id]" => "%{x_edge_request_id}"
      }

The [@metadata][document_id] field will be used later, as the record ID when we push the records to Elasticsearch. Like the [@metadata][type] field, this is another case where, if you're only going to process CloudFront log files with this pipeline, you could omit the extra metadata field and just use x_edge_request_id directly for the record ID when you configure the Elasticsearch output.

Once the log entry has been parsed, the last part of the csv filter removes some now-redundant fields: message (the full log entry text itself), plus cloudfront_fields and cloudfront_version (which the s3snssqs input we'll add later includes automatically):

remove_field => ["cloudfront_fields", "cloudfront_version", "message"] }

The next filter step converts the datetime field (created above from the date and time fields) into a proper datetime object:

    # parse datetime
    date {
      match => ["datetime", "yy-MM-dd HH:mm:ss"]
      remove_field => ["datetime", "date", "time"]
    }

This sets the parsed datetime as the value of the @timestamp field. We also remove the datetime, date, and time fields, since they're no longer needed now that the @timestamp field holds the parsed datetime.

The next filter uses the client IP address to look up the client's probable physical location:

    # lookup geolocation of client ip address
    geoip {
      source => "c_ip"
      target => "geo"
    }

This creates a geo field with a bunch of subfields (like [geo][country_name], [geo][city_name], and so on) containing the probable location details. Note that for many IP addresses, many of the subfields will have no mapped value. See the Geoip filter documentation for more details.

The next filters URL-decode the user-agent field and then parse it. The useragent filter parses the cs_user_agent field into a ua field which, like the geo field, will contain a bunch of subfields. We'll pull out a few of those subfields and add them as fields with ECS names:

    # parse user-agent into subfields
    urldecode {
      field => "cs_user_agent"
    }
    useragent {
      source => "cs_user_agent"
      target => "ua"
      add_field => {
        "user_agent.name" => "%{[ua][name]}"
        "user_agent.version" => "%{[ua][major]}"
        "user_agent.device.name" => "%{[ua][device]}"
        "user_agent.os.name" => "%{[ua][os_name]}"
        "user_agent.os.version" => "%{[ua][os_major]}"
      }
      remove_field => ["cs_user_agent", "ua"]
    }

Since the user-agent info we want now lives in those newly added user_agent.* fields, the last part of the useragent filter removes the cs_user_agent field and the intermediate ua field.

When using the file input, as we are while testing this pipeline, the input adds a path field to each record, containing the path of the file it read. Later, when we use the s3snssqs input, that input will pass the same path in a [@metadata][s3][object_key] field instead. So that we can access this value uniformly regardless of which input we use, our next filter step sets the path field from the [@metadata][s3][object_key] field, if that field is present:

    # pull logfile path from s3 metadata, if present
    if [@metadata][s3][object_key] {
      mutate {
        add_field => {
          "path" => "%{[@metadata][s3][object_key]}"
        }
      }
    }

With the path field now containing the file path regardless of the input used, we use a filter to chop the path down to just the logfile name (like E123456789ABCD.2020-01-02-03.abcd1234.gz):

    # strip directory path from logfile path, and canonicalize field name
    mutate {
      rename => {
        "path" => "log.file.path"
      }
      gsub => ["log.file.path", ".*/", ""]
      remove_field => "host"
    }

This filter also renames the path field to log.file.path (its canonical ECS name), and removes the host field (which the file input adds, like the path field, based on the host running Logstash; we don't care to have it as part of our log records in Elasticsearch).

The last filter in our pipeline renames all the CloudFront fields that have an equivalent ECS (Elastic Common Schema) field name:

    # canonicalize field names, and drop unwanted fields
    mutate {
      rename => {
        "c_ip" => "client.ip"
        "cs_bytes" => "http.request.bytes"
        "sc_content_len" => "http.response.body.bytes"
        "sc_content_type" => "http.response.body.type"
        "cs_method" => "http.request.method"
        "cs_protocol" => "url.scheme"
        "cs_protocol_version" => "http.version"
        "cs_referer" => "http.request.referrer"
        "cs_uri_query" => "url.query"
        "cs_uri_stem" => "url.path"
        "sc_bytes" => "http.response.bytes"
        "sc_status" => "http.response.status_code"
        "ssl_cipher" => "tls.cipher"
        "ssl_protocol" => "tls.protocol_version"
        "x_host_header" => "url.domain"
      }

To match the ECS field specs, the middle part of the filter strips the HTTP/ prefix from the http.version field value (converting values like HTTP/2.0 to just 2.0), and strips the TLSv prefix from the tls.protocol_version field value (converting values like TLSv1.2 to just 1.2):

gsub => [ "http.version", "HTTP/", "", "tls.protocol_version", "TLSv", "" ]

Finally, the last part of the filter removes the other CloudFront fields that we don't care about:

remove_field => [ "c_port", "cs_cookie", "cs_host", "fle_encrypted_fields", "fle_status", "sc_range_end", "sc_range_start", "x_forwarded_for" ] } } }

The output section of the pipeline simply prints each log record to Logstash's own log output; this is what you saw when tailing the Logstash logs:

output {
  stdout {
    codec => "rubydebug"
  }
}

Set Up the Main Logstash Pipeline

Once you're happy with your test pipeline, it's time to change the pipeline's output section to push its output to Elasticsearch. Replace the output block of your /etc/logstash/conf.d/my-cloudfront-pipeline.conf file with this block (substituting your own host, user, and password settings, plus whatever custom SSL settings you need; see the Elasticsearch output plugin documentation for details):

output {
  # don't try to index anything that didn't get a document_id
  if [@metadata][document_id] {
    elasticsearch {
      hosts => ["https://elasticsearch.example.com:9243"]
      user => "elastic"
      password => "password123"
      document_id => "%{[@metadata][document_id]}"
      ecs_compatibility => "v1"
      index => "ecs-logstash-%{[@metadata][type]}-%{+YYYY.MM.dd}"
    }
  }
}

The following line in this block adds an extra guard against indexing anything that wasn't parsed properly (you might instead want to send such log entries to a dedicated errors index, to keep an eye on entries that failed to parse):

if [@metadata][document_id] {

This line sets the record ID for each entry using the [@metadata][document_id] field (recall that in the pipeline filters, we copied the value of the CloudFront x_edge_request_id, which should be unique for every request, into the [@metadata][document_id] field):

document_id => "%{[@metadata][document_id]}"

And because our output block sets ecs_compatibility to v1, which directs Logstash to use ECS-compatible index templates, this line directs Logstash to create a separate index for each day and each type of log entry processed:

index => "ecs-logstash-%{[@metadata][type]}-%{+YYYY.MM.dd}"

For example, if we process CloudFront log entries from January 2, 2020, Logstash will create an index for them named ecs-logstash-cloudfront-2020.01.02 (or use the existing index with that name, if it already exists).

Once you change the output block, restart Logstash. In Logstash's own log output, you should see entries indicating a successful connection to your Elasticsearch host, as well as a verbose entry for the index template it installs in Elasticsearch. Once you see that, check your Elasticsearch instance; you should see that a new ecs-logstash-cloudfront-YYYY.MM.DD index has been created, containing the entries from your sample CloudFront log file.
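
A quick way to verify is Elasticsearch's _cat indices API, using the example host and credentials from the output block above:

curl -s -u elastic:password123 \
    'https://elasticsearch.example.com:9243/_cat/indices/ecs-logstash-cloudfront-*?v'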

You can use this same mechanism to backfill your existing CloudFront log files into Elasticsearch: manually download the log files to backfill onto the Logstash machine (such as via the s3 CLI's sync command), and customize the path attribute of the file input block (using wildcards) to direct Logstash to read them, as sketched below.
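
A sketch of what that backfill might look like, syncing the distribution's whole log directory down and pointing the file input at it with a wildcard:

# download all existing logfiles for the distribution
aws s3 sync \
    s3://my-log-bucket/my-cloudfront-logs/www.example.com/ \
    /var/log/my-cloudfront-logs/www.example.com/

# then in the pipeline's file input, use a wildcard path like:
#   path => "/var/log/my-cloudfront-logs/www.example.com/*.gz"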

For CloudFront log files going forward, however, we'll make one more change to the pipeline, and use the "S3 via SNS/SQS" input (aka s3snssqs) to pull CloudFront log files from S3 as CloudFront publishes them.

First, create a new IAM policy for the Logstash machine to use, allowing it both to read from the log bucket, and to read and delete items from the SQS queue we set up above. The policy should look like this (change the Resource elements to point to your own S3 log bucket and SQS log queue, as set up in the first two sections of this article):

{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "s3:ListBucket", "Resource": "arn:aws:s3:::my-log-bucket" }, { "Effect": "Allow", "Action": "s3:GetObject", "Resource": "arn:aws:s3:::my-log-bucket/my-cloudfront-logs/*" }, { "Effect": "Allow", "Action": [ "sqs:Get*", "sqs:List*", "sqs:ReceiveMessage", "sqs:ChangeMessageVisibility", "sqs:DeleteMessage" ], "Resource": [ "arn:aws:sqs:us-east-1:123456789012:my-cloudfront-log-notifications" ] } ] }

Then install the logstash-input-s3-sns-sqs plugin on your Logstash machine:

cd /usr/share/logstash
sudo -u logstash bin/logstash-plugin install logstash-input-s3-sns-sqs

Then update the input section of the pipeline to the following (substituting your own SQS queue name and its AWS region):

input {
  # pull new logfiles from s3 when notified
  s3snssqs {
    region => "us-east-1"
    queue => "my-cloudfront-log-notifications"
    from_sns => false
    type => "cloudfront"
  }
}

If you're running the Logstash machine in AWS, you can use the usual EC2 instance profile IAM role mechanism to grant the machine access to the policy you created above. Otherwise, you'll also need to add some AWS credentials settings to the s3snssqs input; consult the S3 input plugin's documentation for the options (the s3snssqs input allows the same AWS credentials options as the S3 input does, but the S3 input has better documentation for them).

Now restart Logstash. You should see the same output as before in Logstash's own logs; and if you check Elasticsearch, you should see new records being added.

View the Logs in Kibana

Eventually you'll want to create fancy dashboards in Kibana for your new CloudFront data; but for now, we'll just start with setting up a listing for the logs that you can view in the "Discover" section of Kibana.

First log into Kibana, and navigate to the "Management" > "Stack Management" section of Kibana. In the Stack Management section, if you navigate to the "Data" > "Index Management" subsection, you should see a bunch of new indexes named in the form of ecs-logstash-cloudfront-YYYY.MM.DD (like ecs-logstash-cloudfront-2020.01.01 and so on):

Once you've verified that Kibana can see the indexes, navigate to the "Kibana" > "Index Patterns" subsection, and click the "Create index pattern" button. Specify ecs-logstash-cloudfront-* as the pattern, and select @timestamp as the time field:

With your new index pattern created, navigate out of the Stack Management section to the main "Discover" section of Kibana. This will display your most recent Discover search. On the left side of the page, change the selected index pattern to the one you just created (ecs-logstash-cloudfront-*). You should now see your most recent CloudFront entries listed (if not, use the time-window selector at the top right of the page to expand the time window to include a range in which you know there should be some entries). You can use this page to create listings for your CloudFront logs with custom columns and custom filter settings:
