A10: Insufficient logging and monitoring 2017 OWASP
OWASP, API Security, WAF
Insufficient logging and monitoring is in the OWASP Top 10 for several reasons: it is hard to detect, and it is hard to protect against. There are several ways we can defend ourselves against this vulnerability, but first we need to cover what it entails.
What is Insufficient Logging & Monitoring?
Besides not producing enough log entries when events occur, this issue also covers the level of detail that is logged: we should make sure we can trace back anything required after an unwanted occurrence such as a cyberattack. Common events to consider are logins, logouts, requests and responses that matter to business users, and anything related to limited resources such as wallets.
Of course it is not only about what is logged but also how it interacts with the system. If a log entry contains the wrong characters, it can break the integrity of the logs. This is known as log injection or log poisoning.
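To make the injection risk concrete, here is a minimal Python sketch (names and the `sanitize` helper are illustrative, not from any specific framework): a username containing a newline would otherwise forge a second, legitimate-looking log line, so we escape line breaks before the value reaches the log.

```python
import io
import logging

def sanitize(value: str) -> str:
    """Escape characters that could let user input forge extra log lines."""
    return value.replace("\r", "\\r").replace("\n", "\\n")

# Capture log output in memory so the effect is easy to inspect.
buffer = io.StringIO()
logging.basicConfig(stream=buffer, level=logging.INFO,
                    format="%(levelname)s %(message)s", force=True)

# A malicious username tries to inject a fake "login succeeded" entry.
username = "alice\nINFO login succeeded for user: admin"

logging.info("login failed for user: %s", sanitize(username))
```

Without `sanitize`, the buffer would contain two log lines, the second indistinguishable from a genuine success entry; with it, the payload stays on one line as visible escaped text.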
We also need to ensure that sufficient monitoring is put in place to safeguard the application. After all, there is no use in logging things that never get monitored. This goes further than the logs themselves: we need to monitor everything, including APIs and connections to third-party applications.
Make sure the logging infrastructure itself is secure and that malicious actors cannot easily access it: replace default passwords and host the logging system in a secure internal location.
How to detect Insufficient Logging and Monitoring
Detecting this vulnerability is definitely not an easy task. It requires a good inventory system that keeps track not only of the hardware in the environment but also of the software and the flows that matter to business stakeholders. Communication is hard and will remain a hurdle for many companies, so expecting many teams to work together without proper oversight is unrealistic. The inventory needs to be centralized, managed by a single owner within the company, and updated regularly.
It is also important to investigate new vulnerabilities and CVEs as they arise, since they might affect the organization. This can be narrowed down to the components running within our organization; for example, we can go to Exploit-DB and search for "microsoft" to see that many practical vulnerabilities are still being discovered quite often.
Of course we can perform our monitoring with tools, but with so many on the market, which do you pick?
Nagios: This open-source tool is excellent for keeping an eye on your logs and making sure they do not contain entries that could indicate trouble. There can be some false positives, but the fact that it is open source and very flexible in certain areas makes up for this.
Snort: This is what is known as an IDS/IPS; these systems detect and prevent intruders from completing their mission. It comes with a very solid base ruleset that you can expand or adapt to your needs, and it will monitor the logging for attacks while notifying the responsible people through alerts you configure if an attack is ongoing.
Splunk: This tool helps you keep a close eye on your logs by first gathering them from all over the network, in all sorts of formats, making it easier to manipulate the logs and data for detection and identification. It also helps make logs more visual and easier to search.
OSSEC: This free IDS aids organizations by monitoring logs and actively alerting the responsible people in case of an attack. It will usually not act on its own to introduce defences, but that is not the goal of an IDS; it is up to the system administrator to introduce proper preventive measures.
Tripwire: This free and open-source tool is very good at monitoring our infrastructure: it keeps an eye on the filesystem and compares it to a known-good baseline. This is a great way to increase detection rates, and in combination with other tools it forms a great weapon.
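The baseline idea behind Tripwire can be sketched in a few lines of Python. This is a toy illustration of the concept only, not Tripwire's actual format or algorithm: hash files into a baseline, re-hash later, and report anything that changed.

```python
import hashlib
import tempfile
from pathlib import Path

def snapshot(paths):
    """Map each file path to the SHA-256 hash of its contents."""
    return {str(p): hashlib.sha256(p.read_bytes()).hexdigest() for p in paths}

def diff(baseline, current):
    """Return paths that appeared, disappeared, or whose hash changed."""
    changed_keys = {p for p in baseline.keys() & current.keys()
                    if baseline[p] != current[p]}
    return sorted((set(baseline) ^ set(current)) | changed_keys)

# Demo with a temp file standing in for a sensitive system file.
tmp = Path(tempfile.mkdtemp())
f = tmp / "passwd"
f.write_text("root:x:0:0\n")
baseline = snapshot([f])

f.write_text("root:x:0:0\nmallory:x:0:0\n")  # attacker edits the file
changed = diff(baseline, snapshot([f]))
print(changed)  # the modified file shows up in the report
```

A real deployment would store the baseline out of the attacker's reach (e.g. on read-only media), since a tamperable baseline defeats the purpose.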
One notable attack scenario can be found in a CVE where attackers have the ability to pollute the logs and hide audit information from system administrators. This is of course not good, but it does not have a very high impact, as we can see on the seclists.org website:
Magento, a common e-commerce platform, also released a patch against this vulnerability type on October 8, 2019, which aims to solve the issue covered here. The vulnerability occurs because administrators' actions are not properly logged. More information can be found here:
How to Prevent Insufficient Logging and Monitoring
Prevention of this vulnerability type depends in large part on how diligently your organization keeps track of what should and should not be logged, and how diligently those logs are monitored. We can use tools to aid the process, but we also need to ensure that our software is designed for optimal logging. Besides creating those logs, we need a good way to monitor them. Let's go through some tips to help you prevent this prevalent security issue before we look at more tooling.
We need to log at least the following actions: authentication- and authorization-related events, any events related to limited resources such as wallets, and business-critical events. While doing so, we need to log enough data to make monitoring possible later on: the user context, the event itself, the endpoints involved, and any error messages.
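A sketch of what such an entry might look like, using only the Python standard library; the `security_event` helper, field names, and endpoint are illustrative assumptions, not a prescribed schema.

```python
import json
import logging
import time

def security_event(event: str, user: str, endpoint: str,
                   outcome: str, detail: str = "") -> str:
    """Build a structured, machine-parseable security log entry."""
    return json.dumps({
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "event": event,        # e.g. login, logout, wallet.debit
        "user": user,          # user context
        "endpoint": endpoint,  # endpoint involved
        "outcome": outcome,    # success / failure
        "detail": detail,      # error message, if any
    })

logging.basicConfig(level=logging.INFO, format="%(message)s", force=True)
entry = security_event("login", "alice", "/api/v1/session",
                       "failure", "bad password")
logging.info(entry)
```

Emitting JSON per event keeps each of the fields above queryable later, which is exactly what makes monitoring possible.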
The logs that we generate should be easy to aggregate and consolidate in a centralized system, and easy for log management tools to consume.
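Shipping logs to a central collector can be done with the standard library's `SysLogHandler` alone; the address below is an assumption, and in practice you would point it at whatever aggregator (rsyslog, a SIEM ingest endpoint, etc.) your organization runs.

```python
import logging
import logging.handlers

logger = logging.getLogger("app")
logger.setLevel(logging.INFO)

# Forward log records over syslog/UDP to a central collector.
# ("localhost", 514) is a placeholder for your real aggregator.
handler = logging.handlers.SysLogHandler(address=("localhost", 514))
handler.setFormatter(logging.Formatter("app: %(levelname)s %(message)s"))
logger.addHandler(handler)

logger.info("user=alice event=login outcome=success")
```

Because every service forwards to the same place, the collector becomes the single point where consolidation, retention, and search are handled.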
For really critical processes, we need to ensure that audit logging is enabled and that we can trace the calls of one specific user. If an attacker's traffic gets lost in the haystack of other traffic, it is effectively obfuscated, making it very hard to find out what is going on.
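One common way to make a single user's calls traceable is a correlation id attached to every audit entry in a flow. The sketch below assumes invented names (`audit`, the `wallet.*` actions); the point is only the pattern: filter on one id and the whole trail falls out of the haystack.

```python
import io
import logging
import uuid

buffer = io.StringIO()
logging.basicConfig(stream=buffer, level=logging.INFO,
                    format="%(message)s", force=True)

def audit(correlation_id: str, user: str, action: str) -> None:
    """Write one audit entry tagged with the flow's correlation id."""
    logging.info("audit id=%s user=%s action=%s", correlation_id, user, action)

# One user's journey through a critical flow, all entries sharing an id.
cid = str(uuid.uuid4())
audit(cid, "alice", "wallet.view")
audit(cid, "alice", "wallet.withdraw amount=500")

# Later: filtering on the id reconstructs exactly what this user did.
trail = [line for line in buffer.getvalue().splitlines() if cid in line]
```

In a web application the correlation id is typically generated per request (or per session) and passed through every layer that writes audit entries.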
Once the logging component is in place, we need to enable monitoring on these logs and ensure that the proper stakeholders are notified when an event occurs.
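A minimal monitoring sketch, under the assumption of the log line format and threshold shown: scan entries for failed logins and flag any user exceeding the threshold. A real setup would stream from the aggregator and page the on-call stakeholder instead of collecting a list.

```python
import re
from collections import Counter

# Sample log lines; in practice these would stream from the collector.
LOG_LINES = [
    "2024-05-01T10:00:01Z login failure user=alice",
    "2024-05-01T10:00:02Z login failure user=alice",
    "2024-05-01T10:00:03Z login failure user=alice",
    "2024-05-01T10:00:07Z login success user=bob",
]

THRESHOLD = 3  # failures before we raise an alert

def failed_logins(lines):
    """Count login failures per user across the given log lines."""
    pattern = re.compile(r"login failure user=(\w+)")
    return Counter(m.group(1) for line in lines for m in pattern.finditer(line))

alerts = [user for user, n in failed_logins(LOG_LINES).items() if n >= THRESHOLD]
print(alerts)  # → ['alice']
```

Even a rule this simple turns passive log files into an active signal; IDS tools like Snort and OSSEC apply the same principle with far richer rulesets.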
Ways to manage incidents have already been laid out for us, so we should at least consider them and set up an incident response plan, so that organisations do not have to improvise when the heat is on. One example, created by NIST, can be found at https://csrc.nist.gov/publications/detail/sp/800-61/rev-2/final. It is very comprehensive and covers most if not all attack scenarios.
We should use tools to help us aggregate and manage logs. We will look at some of these in the next section.
Log data management tools
NLog: This comprehensive, flexible, open-source logging tool for various .NET platforms helps you create easily searchable logging by letting you write to multiple targets such as databases, files, and the console. You can change your logging configuration on the fly, which adds all the more power to your defensive arsenal.
httpry: This fully free packet-sniffing tool is designed to capture and display HTTP traffic as logging. Even in its beta version (0.1.8 at the time of writing), it offers many powerful features that help you prevent this issue.
Open-AudIT: This configuration-management tool is designed to give you deep insight into your network by showing you what is on it and the configuration it is running. It also keeps an eye on configuration changes and flags them immediately after detection.
Logging and Monitoring Tips
Log entries might help you discover an attack before it is too late. Make sure you log a full audit trail for high-security transactions such as authentication and authorization, and keep full audit trails on business-critical flows.
Good logging is useless without someone watching it, so make sure you have proper monitoring in place alongside it.
Know what is on your network and aggregate your logs so you can search them with a tool such as Elasticsearch, while creating proper visualizations and dashboards with a tool like Kibana.
If your logging or monitoring component is not sufficiently set up, it can cost your organization tremendous amounts of money without you even knowing it. The problem adds time to debugging, and in the event of an attack it can be detrimental to an organization's reaction speed.
Work with tools to combine the human eye with the diligence of a machine. This lets you cover a much broader area, and tools have evolved into great assets these days.
While this vulnerability type may not seem like a big issue at first, it can become one at the moment you need logging most. In the event of an attack or a critical bug, your ability to debug properly may be severely hindered, so it really pays off to invest in proper logging and monitoring. It will greatly strengthen the passive security of your web applications, server infrastructure, and APIs.