Top JSON Logging Tips
When teams begin to analyze their logs, they almost immediately run into a problem, and they'll need some JSON logging tips to overcome it. Logs are naturally unstructured, which means that if you want to visualize or analyze your logs, you are forced to deal with many potential variations. You can eliminate this problem by logging out valid JSON, setting the foundation for log-driven observability across your applications.
Monitoring logs in JSON is powerful, but it comes with its own complications. Let’s get into our top JSON logging tips and how they can help.
Of all the JSON logging tips in this document, validating your JSON is the least often implemented. Once you're producing JSON in your logs, you'll need to ensure that the JSON you're generating can be validated. For example, if you want to run your logs through some analysis, you must ensure that a given field exists; if it does not, the log can be rejected or simply skipped.
This validation stage helps to ensure that the data you’re analyzing meets some basic requirements, which in turn makes your analysis simpler. The most common approach to validating a JSON object is to use a JSON schema, which has been implemented in dozens of languages and frameworks.
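To make this concrete, here is a minimal sketch of JSON Schema validation using Ajv, one popular validator for TypeScript/JavaScript. The schema fields mirror the example log line later in this article, and rawLogLine is an illustrative placeholder for whatever line you are ingesting:

import Ajv from "ajv";

const ajv = new Ajv();

// Require the fields our analysis depends on.
const logSchema = {
  type: "object",
  properties: {
    level: { type: "string" },
    message: { type: "string" },
    sessionId: { type: "string" },
    timestamp: { type: "number" },
  },
  required: ["level", "message", "timestamp"],
};

const validate = ajv.compile(logSchema);

const rawLogLine = '{"level":"INFO","message":"Checkout started","timestamp":1634477804}';
const entry = JSON.parse(rawLogLine);

if (!validate(entry)) {
  // Reject or skip the line rather than letting malformed data into analysis.
  console.error("Invalid log entry", validate.errors);
}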
A log statement can announce that a given event has happened, but without context it can be challenging to understand its significance. Context gives the bigger picture, helping you draw connections between many log statements to understand what is going on.
For example, if you’re logging out a user’s journey through your website, you may wish to include a sessionId field to make it easy to understand that each event is part of the same session.
{
  "level": "INFO",
  "message": "User has added Item 12453 to Basket",
  "sessionId": "SESS456",
  "timestamp": 1634477804
}
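One way to attach context like this automatically is a child logger, so every statement in a session carries the same fields without repeating yourself. A minimal sketch using the pino library (the values are the illustrative ones from the example above):

import pino from "pino";

const logger = pino();

// The child logger stamps sessionId onto every statement it emits.
const sessionLogger = logger.child({ sessionId: "SESS456" });

// Emits one compact JSON line including level, timestamp, sessionId, and msg.
sessionLogger.info("User has added Item 12453 to Basket");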
When logging out in JSON, you need to ensure that you’re not filling up valuable disk space with whitespace. More often than not, this whitespace will not add much readability to your logs. It’s far better to compress your JSON logs and focus on using a tool to consume and analyze your data. This may seem like one of the more obvious JSON logging tips, but in reality, people regularly forget it.
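In JavaScript/TypeScript terms, this is simply the difference between pretty-printed and compact JSON.stringify output, as this quick sketch shows:

const entry = {
  level: "INFO",
  message: "User has added Item 12453 to Basket",
  sessionId: "SESS456",
  timestamp: 1634477804,
};

// Pretty-printed: easier on human eyes, but every line pays for the indentation.
const pretty = JSON.stringify(entry, null, 2);

// Compact: one line, no wasted whitespace, the form you want on disk.
const compact = JSON.stringify(entry);

console.log(pretty.length - compact.length, "bytes saved on this one line");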
As well as contextual information, you need some basic information in your logs; this is one of the most often ignored JSON logging tips. For example, you should include log levels in your JSON logs. Log levels can be set to custom values, but the standard convention is to use one of the following: TRACE, DEBUG, INFO, WARN, ERROR, and FATAL. Sticking to these conventions will simplify your operational challenge, since you can look for any log line at ERROR severity or worse across all of your applications when you need to resolve serious incidents quickly.
In addition, you can include custom values in your JSON logs, such as appName or hostName. Working out which fields you need will take a little trial and error, but a log line that includes some basic fields about the logging source will make it far easier to zero in on the logs from a single microservice.
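A sketch of wiring levels and source fields together with pino: its base option attaches fields such as these to every line it emits. The appName value here is purely illustrative:

import os from "os";
import pino from "pino";

// Every line from this logger carries the source fields, making it easy
// to isolate the logs of a single microservice in your log store.
const logger = pino({
  base: { appName: "checkout-service", hostName: os.hostname() },
});

logger.warn("Payment provider responded slowly");
logger.error("Payment provider request failed");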
It's tempting to write logs only when there are errors, but as we've seen above, logs are much more potent than a simple error indicator. Logs can provide a rich understanding of your system and are a cornerstone of observability, so you may wish to write logs for all sorts of critical events, not just failures.
But how much should you log?
The question is a tricky one, but there is a basic truth: provided you can query your logs effectively, you're never going to be stuck because you have too much data. Conversely, if you don't have enough data, there's nothing you can do about it after the fact.
It is far better to have too much data and need to cut through the noise, which should inform your decision-making when it comes to your JSON logs. Be generous with your logs, and they will be kind to you when you’re troubleshooting a critical issue. Logs can capture a wide array of different events, and remember, if it’s not an important event, you can always set it to DEBUG or TRACE severity and filter it out in your analysis.
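For instance, you can call the DEBUG and TRACE methods generously in code and let a configured minimum level decide what is actually emitted; a minimal sketch with pino, assuming a LOG_LEVEL environment variable:

import pino from "pino";

// The configured minimum level decides what is emitted:
// LOG_LEVEL=debug in development, "info" as the fallback here.
const logger = pino({ level: process.env.LOG_LEVEL ?? "info" });

logger.debug("Cache lookup took 3ms"); // suppressed unless LOG_LEVEL is debug or lower
logger.info("Order 12453 submitted");  // emitted at the fallback level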
If you're looking at an empty project, it is far easier to begin with JSON logs than it is to fit JSON logging into your application retrospectively. For this reason, you may wish to start with JSON logging from day one, for example by adopting an established structured logging library for your language rather than hand-rolling your own log format.
JSON logging is the start. Once you’ve got some solid JSON logs to work with, you can begin to visualize and analyze your logs in ways that were simply impossible before. You’ll be able to make use of Kibana to render information about your logs, drive alerts based on your JSON logs, and much more.
JSON logs will give you unprecedented insight into your system, and this will enable you to catch incidents sooner, recover faster, and focus on what’s important.