awslogs agent can't keep up
I'm running the awslogs agent on a server, and when I look at CloudWatch Logs in the AWS console, the logs are 60 minutes behind. Our server produces 650 MB of data per hour, and it appears the agent is not able to keep up.
Here is our abbreviated config file:
[application.log]
datetime_format = %Y-%m-%d %H:%M:%S
time_zone = UTC
file = var/output/logs/application.json.log*
log_stream_name = {hostname}
initial_position = start_of_file
log_group_name = applicationlog

[service_log]
datetime_format = %Y-%m-%dT%H:%M:%S
time_zone = UTC
file = var/output/logs/service.json.log*
log_stream_name = {hostname}
initial_position = start_of_file
log_group_name = servicelog
Is there a common way to speed up the awslogs agent?
The amount of data you have (650 MB per hour is roughly 0.18 MB/s) is not an issue for the agent; the agent has a capacity of about 3 MB/s per log file. However, if you use the same log stream for multiple log files, the writers push to the same stream and end up blocking each other, and throughput more than halves when a stream is shared between log files.
There are also a few properties that can be configured and may have an impact on performance:
buffer_duration = <integer>
batch_count = <integer>
batch_size = <integer>
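For illustration only (the values below are examples, not tuned recommendations; the 32768-byte default batch size and the 10,000-event / 1,048,576-byte per-batch limits come from the CloudWatch Logs documentation), the first section from the question could be extended with these properties:

[application.log]
datetime_format = %Y-%m-%d %H:%M:%S
time_zone = UTC
file = var/output/logs/application.json.log*
log_stream_name = {hostname}
initial_position = start_of_file
log_group_name = applicationlog
# Time in ms to buffer events before a batch is pushed; 5000 ms is the documented minimum.
buffer_duration = 5000
# Maximum number of log events per batch; PutLogEvents accepts at most 10,000 events.
batch_count = 10000
# Maximum batch size in bytes; defaults to 32768, capped at 1,048,576 by the API.
batch_size = 524288

Bigger batches mean fewer PutLogEvents calls for the same volume of data, which is usually what matters when the agent falls behind.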
To solve the issue, I did two things:
- Drastically increase the batch size (it defaults to 32768 bytes)
- Use a different log stream for each log file
After that, the agent had no problems keeping up. Here's the final config file:
[application.log]
datetime_format = %Y-%m-%d %H:%M:%S
time_zone = UTC
file = var/output/logs/application.json.log*
log_stream_name = {hostname}-app
initial_position = start_of_file
log_group_name = applicationlog
batch_size = 524288

[service_log]
datetime_format = %Y-%m-%dT%H:%M:%S
time_zone = UTC
file = var/output/logs/service.json.log*
log_stream_name = {hostname}-service
initial_position = start_of_file
log_group_name = servicelog
batch_size = 524288
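Note that batch_size can only be raised so far: the PutLogEvents API accepts at most 1,048,576 bytes per batch, so 524288 leaves headroom while still making each API call carry roughly 16 times more data than the 32768-byte default. The agent also typically needs to be restarted after the config change before the new settings take effect.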