aws lambda
by Yan Cui
During the execution of a Lambda function, whatever you write to stdout (for example, using console.log in Node.js) will be captured by Lambda and sent to CloudWatch Logs asynchronously in the background. And it does this without adding any overhead to your function execution time.
You can find all the logs for your Lambda functions in CloudWatch Logs. There is a unique log group for each function. Each log group then consists of many log streams, one for each concurrently executing instance of the function.
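The log group follows a fixed naming convention, which is handy when you need to locate a function's logs programmatically (a hypothetical helper):

```javascript
// Lambda names each function's log group /aws/lambda/<function name>;
// the log streams inside it map to individual concurrent instances.
const logGroupFor = (functionName) => `/aws/lambda/${functionName}`;

console.log(logGroupFor('ship-logs')); // prints /aws/lambda/ship-logs
```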
You can send logs to CloudWatch Logs yourself via the PutLogEvents operation. Or you can send them to your preferred log aggregation service such as Splunk or Elasticsearch.
But, remember that everything has to be done during a function’s invocation. If you make additional network calls during the invocation, then you’ll pay for that additional execution time. Your users would also have to wait longer for the API to respond.
These extra network calls might only add 10–20ms per invocation. But you have microservices, and a single user action can involve several API calls. Those 10–20ms per API call can compound and add over 100ms to your user-facing latency, which is enough to reduce sales by 1% according to Amazon.
So, don’t do that!
Instead, process the logs from CloudWatch Logs after the fact.
In the CloudWatch Logs console, you can select a log group and choose to stream the data directly to Amazon’s hosted Elasticsearch service.
This is very useful if you’re using the hosted Elasticsearch service already. But if you’re still evaluating your options, then give this post a read before you decide on the AWS-hosted Elasticsearch.
You can also stream the logs to a Lambda function instead. There are already a number of Lambda function blueprints for pushing CloudWatch Logs to other log aggregation services.
Clearly this is something a lot of AWS’s customers have asked for.
You can use these blueprints to help you write a Lambda function that’ll ship CloudWatch Logs to your preferred log aggregation service. But here are a few more things to keep in mind.
Whenever you create a new Lambda function, it’ll create a new log group in CloudWatch logs. You want to avoid a manual process for subscribing log groups to your log shipping function.
Instead, enable CloudTrail, and then set up an event pattern in CloudWatch Events to invoke another Lambda function whenever a log group is created.
You can do this one-off setup in the CloudWatch console.
If you're working with multiple AWS accounts, then you should avoid making the setup a manual process. With the Serverless framework, you can set up the event source for this subscribe-log-group function in the serverless.yml.
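In serverless.yml, that can look something like the following (the function name and handler path are illustrative; the event pattern matches the CreateLogGroup API call recorded by CloudTrail):

```yaml
functions:
  subscribe-log-group:
    handler: functions/subscribe.handler  # illustrative path
    events:
      - cloudwatchEvent:
          event:
            source:
              - aws.logs
            detail-type:
              - AWS API Call via CloudTrail
            detail:
              eventSource:
                - logs.amazonaws.com
              eventName:
                - CreateLogGroup
```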
Another thing to keep in mind is that you need to avoid subscribing the log group for the ship-logs function to itself. It'll create an infinite invocation loop, and that's a painful lesson that you want to avoid.
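Inside the subscribe-log-group function, a simple guard is enough; the ship-logs function name here is an assumption:

```javascript
// The log group that belongs to the ship-logs function itself must be
// skipped: shipping a log line writes new log lines, which would then
// be shipped again, and so on forever.
const SHIP_LOGS_LOG_GROUP = '/aws/lambda/ship-logs'; // assumed function name

const shouldSubscribe = (logGroupName) =>
  logGroupName !== SHIP_LOGS_LOG_GROUP;
```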
One more thing.
By default, when Lambda creates a new log group for your function, the retention policy is set to Never Expire. This is overkill, as the data storage cost can add up over time. It's also unnecessary if you're shipping the logs elsewhere already!
We can apply the same technique above and add another Lambda function to automatically update the retention policy to something more reasonable.
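That function boils down to a single PutRetentionPolicy call against the new log group. A sketch of the parameters it needs (30 days is an arbitrary choice; CloudWatch Logs only accepts a fixed set of values such as 1, 7, 30, 90, and 365 days):

```javascript
// Build the parameters for CloudWatch Logs' PutRetentionPolicy operation.
// With the AWS SDK for JavaScript you would then run something like:
//   new AWS.CloudWatchLogs().putRetentionPolicy(params).promise()
const retentionParams = (logGroupName, retentionInDays = 30) => ({
  logGroupName,
  retentionInDays,
});
```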
If you already have lots of existing log groups, then consider writing one-off scripts to update them all. You can do this by recursing through all log groups with the DescribeLogGroups API call.
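Such a script has to follow the nextToken pagination cursor that DescribeLogGroups returns. In this sketch the API call is injected, so the loop can be exercised without AWS credentials:

```javascript
// Collect every log group by following the nextToken cursor returned
// by DescribeLogGroups. `describe` stands in for the AWS SDK call,
// e.g. (params) => cloudWatchLogs.describeLogGroups(params).promise().
const listAllLogGroups = async (describe) => {
  const logGroups = [];
  let nextToken;
  do {
    const resp = await describe({ nextToken });
    logGroups.push(...resp.logGroups);
    nextToken = resp.nextToken;
  } while (nextToken);
  return logGroups;
};
```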
If you’re interested in applying these techniques yourself, I have put together a simple demo project for you. If you follow the instructions in the README and deploy the functions, then all the logs for your Lambda functions would be delivered to Logz.io.
Translated from: https://www.freecodecamp.org/news/how-to-implement-log-aggregation-for-aws-lambda-ca714bf02f48/