
7 JSON Logging Tips That You Can Implement

  • Chris Cooney
  • October 26, 2021

When teams begin to analyze their logs, they almost immediately run into a problem, and they'll need some JSON logging tips to overcome it. Logs are naturally unstructured. This means that if you want to visualize or analyze your logs, you are forced to deal with many potential variations. You can eliminate this problem by logging out valid JSON, setting the foundation for log-driven observability across your applications.

Monitoring logs in JSON is powerful, but it comes with its own complications. Let’s get into our top JSON logging tips and how they can help.

1. JSON Validation

Of all the JSON logging tips in this document, this one is the least often implemented. Once you’re producing JSON in your logs, you’ll need to ensure that the JSON you’re generating can be validated. For example, if you want to run your logs through some analysis, you must ensure that a given field exists. If it does not exist, the log can be rejected or simply skipped. 

This validation stage helps to ensure that the data you’re analyzing meets some basic requirements, which in turn makes your analysis simpler. The most common approach to validating a JSON object is to use a JSON schema, which has been implemented in dozens of languages and frameworks.
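
As a quick sketch of what this can look like in practice (assuming the third-party jsonschema package is available; the schema and field names below are only examples), you might validate each log line before analysis:

import json

from jsonschema import ValidationError, validate

# Example schema: every log event must carry a level and a timestamp.
LOG_SCHEMA = {
    "type": "object",
    "properties": {
        "level": {"type": "string"},
        "message": {"type": "string"},
        "timestamp": {"type": "number"},
    },
    "required": ["level", "timestamp"],
}

def is_valid_log(raw_line):
    """Return True if the line is parseable JSON that matches the schema."""
    try:
        validate(instance=json.loads(raw_line), schema=LOG_SCHEMA)
        return True
    except (json.JSONDecodeError, ValidationError):
        # The log can be rejected or simply skipped, as described above.
        return False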

2. Add Context to your JSON Logging Statements

A log statement can announce that a given event has happened, but it can be challenging to understand its significance without context. Context gives the bigger picture, which helps you draw connections between many log statements to understand what is going on.

For example, if you’re logging out a user’s journey through your website, you may wish to include a sessionId field to make it easy to understand that each event is part of the same session.

{
      "level": "INFO",
      "message": "User has added Item 12453 to Basket",
      "sessionId": "SESS456",
      "timestamp": 1634477804
}

3. Remove Whitespace from your JSON Logs

When logging out in JSON, you need to ensure that you're not filling up valuable disk space with whitespace. More often than not, this whitespace adds little readability to your logs. It's far better to minify your JSON logs and rely on a tool to consume and analyze your data. This may seem like one of the more obvious JSON logging tips, but in reality, people regularly forget it.
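
As a rough illustration, Python's standard json module can drop the optional whitespace at serialization time; the event dictionary below is just an example:

import json

event = {
    "level": "INFO",
    "message": "User has added Item 12453 to Basket",
    "sessionId": "SESS456",
}

# The default separators add a space after ':' and ','; compact separators strip it.
compact = json.dumps(event, separators=(",", ":"))
print(compact)  # {"level":"INFO","message":"User has added Item 12453 to Basket","sessionId":"SESS456"}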

4. Use Logging Levels and Additional Fields

As well as contextual information, you need some basic information in your logs. This is one of the most often ignored JSON logging tips. For example, you can include log levels in your JSON logs. Log levels can be set to custom values, but the standard convention is to use one of the following: TRACE, DEBUG, INFO, WARN, ERROR, and FATAL. Sticking to these conventions will simplify your operations, since you can look for any log line at ERROR level or above across all of your applications when you need to resolve serious incidents quickly.

In addition, you can include custom values in your JSON logs, such as appName or hostName. Working out which fields you need will take a little trial and error, but a log line that includes some basic fields about the logging source will make it far easier to zero in on the logs from a single microservice.
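
As one possible sketch using only Python's standard logging and json modules (the appName value below is a placeholder; a real setup would more likely come from a logging library or shared configuration), a small formatter can attach these basic fields to every log line:

import json
import logging
import socket
import time

class JsonFormatter(logging.Formatter):
    """Render every record as a single JSON line with basic source fields."""

    def format(self, record):
        return json.dumps({
            "level": record.levelname,        # e.g. INFO, WARNING, ERROR
            "message": record.getMessage(),
            "timestamp": int(time.time()),
            "appName": "basket-service",      # placeholder for this sketch
            "hostName": socket.gethostname(),
        }, separators=(",", ":"))

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("basket-service")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("User has added Item 12453 to Basket")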

5. Log Errors And Behaviours

It’s tempting to write logs only when there are errors, but as we’ve seen above, logs are far more than a simple error indicator. They can provide a rich understanding of your system and are a cornerstone of observability. You may wish to write logs for all sorts of critical events:

  • Whenever a user logs onto your site
  • Whenever a user makes a purchase on your website, and what they’ve purchased
  • The duration a given user request took to complete in milliseconds
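
The last item, for example, might come out as an event along these lines (the durationMs field and the message are purely illustrative):

{
      "level": "INFO",
      "message": "GET /basket completed",
      "sessionId": "SESS456",
      "durationMs": 142,
      "timestamp": 1634477804
}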

But how much should you log?

6. Plan for the Future

How much should you log? The question is a tricky one, but there is a basic truth. Provided you can query your logs effectively, you’re never going to be stuck because you have too much data. Conversely, if you don’t have enough data, there’s nothing you can do about it. 

It is far better to have too much data and need to cut through the noise, which should inform your decision-making when it comes to your JSON logs. Be generous with your logs, and they will be kind to you when you’re troubleshooting a critical issue. Logs can capture a wide array of different events, and remember, if it’s not an important event, you can always set it to DEBUG or TRACE severity and filter it out in your analysis.
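
As a small sketch of that filtering step, assuming one JSON object per line and a file name that is purely illustrative:

import json

NOISY_LEVELS = {"DEBUG", "TRACE"}

def important_events(lines):
    """Yield only the events above DEBUG/TRACE severity."""
    for line in lines:
        event = json.loads(line)
        if event.get("level") not in NOISY_LEVELS:
            yield event

with open("app.json.log") as log_file:
    for event in important_events(log_file):
        print(event["message"])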

7. The Most Important JSON Logging Tip: Start with JSON Logging

If you’re looking at an empty project, it is far easier to begin with JSON logs than it is to fit JSON logging into your application retrospectively. For this reason, you may wish to consider starting with JSON logging. To make this easier, you can try:

  • Creating a boilerplate for new applications that includes JSON logging by default
  • Installing a logging agent on your servers that automatically wraps log lines in JSON, so that even if an application starts logging out raw strings, all of your logs are still JSON (sketched after this list)
  • Making use of libraries like MDC to ensure that specific values are always present in your JSON logs
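
That wrapping step from the second point can be as simple as the sketch below; the default level, the appName value, and the envelope fields are assumptions about what such an agent might add rather than the behaviour of any particular tool:

import json
import time

def wrap_raw_line(raw_line):
    """Wrap a plain-text log line in a minimal JSON envelope."""
    return json.dumps({
        "level": "INFO",                  # the raw line carries no level, so assume a default
        "message": raw_line.rstrip("\n"),
        "appName": "legacy-app",          # placeholder for this sketch
        "timestamp": int(time.time()),
    }, separators=(",", ":"))

print(wrap_raw_line("User has added Item 12453 to Basket"))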

So what’s next with your JSON logging?

JSON logging is the start. Once you’ve got some solid JSON logs to work with, you can begin to visualize and analyze your logs in ways that were simply impossible before. You’ll be able to make use of Kibana to render information about your logs, drive alerts based on your JSON logs, and much more.

JSON logs will give you unprecedented insight into your system, and this will enable you to catch incidents sooner, recover faster, and focus on what’s important.
