The applog application and fluent-bit share a disk volume. The log content is JSON data, and it is delivered to S3 as JSON as well.

The applog application part lives in the applog directory:

Dockerfile contents
FROM alpine
RUN mkdir -p /data/logs/
COPY testlog.sh /bin/
RUN chmod 777 /bin/testlog.sh
ENTRYPOINT ["/bin/testlog.sh"]

testlog.sh contents
#!/bin/sh
while :
do
echo "{\"server_date\":\"2020-01-19\",\"hostname\":\"ip-172-31-43-24.cn-northwest-1.compute.internal\",\"pid\":5404,\"method\":\"POST\",\"clientIP\":\"10.11.12.13\",\"countryCode\":\"ID\",\"url\":\"/v1/mail/list\",\"status\":\"200\",\"latency\":7,\"length\":24,\"userId\":9536605,\"code\":20001}" >> /data/logs/access.log
echo "{\"server_date\":\"2020-01-19\",\"hostname\":\"ip-172-31-43-24.cn-northwest-1.compute.internal\",\"pid\":1000,\"method\":\"GET\",\"clientIP\":\"20.21.22.23\",\"countryCode\":\"ID\",\"url\":\"/v1/mail/list\",\"status\":\"500\",\"latency\":10,\"length\":12,\"userId\":1010001,\"code\":10001}" >> /data/logs/error.log
sleep 10
done
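
Before moving on to ECS, the generator can be sanity-checked locally. The sketch below assumes the applog directory layout above; the local tag is illustrative.

cd applog
docker build -t applog:v2 .
# Mount a host directory over /data/logs so the generated files are easy to inspect.
docker run -d --name applog-test -v "$(pwd)/logs:/data/logs" applog:v2
# After about 10 seconds both files should be growing, one JSON object per line.
tail -n 2 logs/access.log logs/error.log
docker rm -f applog-test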

The fluent-bit part lives in the fluent-bit directory:

Dockerfile contents
FROM amazon/aws-for-fluent-bit:latest
ADD extra.conf /extra.conf

extra.conf contents
[SERVICE]
Parsers_File /fluent-bit/parsers/parsers.conf
Flush 1
Grace 30

[INPUT]
Name tail
Path /data/logs/access.log
Tag access

[INPUT]
Name tail
Path /data/logs/error.log
Tag error

[FILTER]
Name parser
Match *
Key_Name log
Parser json
Reserve_Data True

[OUTPUT]
Name firehose
Match access
region us-east-1
delivery_stream fluentbit-access

[OUTPUT]
Name firehose
Match error
region us-east-1
# the delivery stream has to be created in Kinesis Data Firehose first
delivery_stream fluentbit-error

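The "Parser json" referenced by the [FILTER] must be defined in the Parsers_File loaded in [SERVICE]; the parsers.conf bundled with the aws-for-fluent-bit image already ships one. For reference, a minimal json parser definition looks roughly like this (nothing extra needs to be added here):

[PARSER]
    Name   json
    Format json

With this filter in place, the JSON string stored under the log key of each tailed line is re-parsed, so fields such as method, status and latency reach Firehose as top-level keys instead of one escaped string; Reserve_Data True keeps any other keys on the record.
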
----------------
Error output from the log_router container ends up in the CloudWatch Logs group /ecs/firelens-sample, under the stream ecs/log_router/03aafe7fa1f4452d862854b33311190f.
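
One way to read that output without opening the console is the AWS CLI, for example (group and stream names taken from the run above):

aws logs get-log-events \
  --log-group-name /ecs/firelens-sample \
  --log-stream-name ecs/log_router/03aafe7fa1f4452d862854b33311190f \
  --region us-east-1 --limit 50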

Image build and push steps:
cd fluent-bit
docker build -t fleuntbit:v3 .
docker tag fleuntbit:v3 402097323/fleuntbit:v3
docker push 402097323/fleuntbit:v3

Docker Hub account: ******/******
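
The applog image referenced in the task definition (402097323/applog:v2) is presumably built and pushed the same way from the applog directory:

cd applog
docker build -t applog:v2 .
docker tag applog:v2 402097323/applog:v2
docker login   # log in with the Docker Hub account above
docker push 402097323/applog:v2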

taskdef.json:
{
  "family": "firelens-sample",
  "taskRoleArn": "arn:aws:iam::254278701124:role/ecsTaskExecutionRole",
  "executionRoleArn": "arn:aws:iam::254278701124:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "essential": true,
      "name": "log_router",
      "image": "402097323/fleuntbit:v3",
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-create-group": "true",
          "awslogs-group": "/ecs/firelens-sample",
          "awslogs-region": "us-east-1",
          "awslogs-stream-prefix": "ecs"
        }
      },
      "mountPoints": [
        {
          "sourceVolume": "data",
          "containerPath": "/data",
          "readOnly": false
        }
      ],
      "firelensConfiguration": {
        "type": "fluentbit",
        "options": {
          "config-file-type": "file",
          "config-file-value": "/extra.conf",
          "enable-ecs-log-metadata": "false"
        }
      },
      "user": "0"
    },
    {
      "essential": true,
      "name": "myapp",
      "image": "402097323/applog:v2",
      "logConfiguration": {
        "logDriver": "awsfirelens"
      },
      "mountPoints": [
        {
          "sourceVolume": "data",
          "containerPath": "/data",
          "readOnly": false
        }
      ],
      "dependsOn": [
        {
          "containerName": "log_router",
          "condition": "START"
        }
      ]
    }
  ],
  "cpu": "256",
  "memory": "512",
  "volumes": [
    {
      "name": "data",
      "host": {}
    }
  ],
  "compatibilities": [
    "EC2",
    "FARGATE"
  ],
  "requiresCompatibilities": [
    "FARGATE"
  ],
  "networkMode": "awsvpc"
}

Note: if enable-ecs-log-metadata is not set to "false", each log entry also includes the following metadata: 1) ecs_cluster, the name of the cluster the task belongs to; 2) ecs_task_arn, the full ARN of the task the container belongs to; 3) ecs_task_definition, the task definition name and revision the task is using; 4) ec2_instance_id, the ID of the Amazon EC2 instance the container is hosted on (this field only applies to tasks using the EC2 launch type).
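
For the firehose outputs to deliver records, the role referenced by taskRoleArn needs permission to call firehose:PutRecordBatch on the two delivery streams. A minimal policy statement for that role could look like this (the ARNs follow the account, region and stream names used above):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "firehose:PutRecordBatch",
      "Resource": [
        "arn:aws:firehose:us-east-1:254278701124:deliverystream/fluentbit-access",
        "arn:aws:firehose:us-east-1:254278701124:deliverystream/fluentbit-error"
      ]
    }
  ]
}

The task definition can then be registered with the AWS CLI (read-only fields such as compatibilities may need to be dropped from the file first):

aws ecs register-task-definition --cli-input-json file://taskdef.json --region us-east-1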

Create the Amazon Kinesis Data Firehose delivery streams and the S3 bucket; the corresponding S3 bucket is shown in the figure, and its permissions need to be set to allow public access. Ref: 使用 AWS FireLens 轻松实现 AWS Fargate 容器日志处理 | 亚马逊AWS官方博客 ("Easily handle AWS Fargate container logs with AWS FireLens", AWS official blog).
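
If you prefer the CLI over the console, the two delivery streams can be created roughly as follows; the IAM role and bucket names are placeholders for whatever was created above:

aws firehose create-delivery-stream \
  --delivery-stream-name fluentbit-access \
  --delivery-stream-type DirectPut \
  --extended-s3-destination-configuration RoleARN=arn:aws:iam::254278701124:role/firehose-delivery-role,BucketARN=arn:aws:s3:::my-fluentbit-logs,Prefix=access/ \
  --region us-east-1

aws firehose create-delivery-stream \
  --delivery-stream-name fluentbit-error \
  --delivery-stream-type DirectPut \
  --extended-s3-destination-configuration RoleARN=arn:aws:iam::254278701124:role/firehose-delivery-role,BucketARN=arn:aws:s3:::my-fluentbit-logs,Prefix=error/ \
  --region us-east-1

Once the task is running, JSON records should start landing under the corresponding prefixes in the bucket after Firehose's buffer interval elapses.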