This issue type concerns older APIs that are developed, used, and then forgotten without ever being removed. The result is APIs that may be poorly patched or rely on outdated libraries. These are also known as shadow APIs, a term that refers to them operating in the background without anyone knowing about them.
What makes this issue so prevalent is that outdated documentation is very hard to check automatically: you cannot probe for every possible API, and even if you tell the firewall which APIs should exist, you will never find shadow APIs that neither send nor receive any traffic. The problem is made worse by the move to microservices, which often leads to over-exposed API endpoints that do not need to be visible to the end user.
Depending on which API endpoints are exposed, the impact ranges from over-exposed data (see API3:2019 "Excessive Data Exposure") to a full takeover of the server if an older, vulnerable version is available to the end user.
For every API we should ask whether all of its current endpoints need to be available at all, and whether we could get by with allowing the API to communicate only internally. It helps to ask whether the API even needs to be in production and who should have access to it. If it does need to be in production, we should also ask whether we are running an outdated version, whether sensitive data is exposed, and how data flows through the application and its APIs.
A lack of documentation is a problem that plagues many companies, and it only makes things worse here: an undocumented API that sends out no traffic can remain undetected for a long time.
To ease the removal of older APIs, we might consider a retirement plan for APIs that are no longer needed. Another tool at our disposal is an inventory management system that indexes every API and its version; this can be used to perform regular inspections of the API landscape.
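As a sketch of what such a regular inventory inspection could look like, the snippet below assumes a hypothetical in-memory registry of endpoints; the entry fields and thresholds are illustrative, not any specific product's API.

```python
from dataclasses import dataclass

# Hypothetical inventory entry; the fields are illustrative assumptions.
@dataclass
class ApiEntry:
    path: str
    version: int
    retired: bool                  # marked for removal in the retirement plan
    last_seen_traffic_days: int    # days since the endpoint last saw traffic

def flag_for_review(inventory, latest_version, stale_days=90):
    """Return paths that are outdated, retired-but-still-listed, or silent."""
    flagged = []
    for api in inventory:
        if (api.version < latest_version
                or api.retired
                or api.last_seen_traffic_days > stale_days):
            flagged.append(api.path)
    return flagged

inventory = [
    ApiEntry("/api/v2/users", 2, False, 3),
    ApiEntry("/api/v1/admin", 1, False, 400),  # old and silent: a shadow API candidate
]
print(flag_for_review(inventory, latest_version=2))  # ['/api/v1/admin']
```

A regular report like this makes silent, forgotten endpoints visible before an attacker finds them first.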
As long as these older APIs and their libraries remain unpatched, the system will be vulnerable to Improper Assets Management.
Example Attack Scenarios
Our first example is a scenario I encountered in production. I was scanning the APIs with a directory brute-forcer to see whether I could find anything and, unsurprisingly, I could not find a thing. I did notice from the URL, however, that I was working with version 2 of the API.
After replacing the v2 in the URL with v1 and restarting my directory brute-forcing attempts, I did find an admin URL.
The admin interface was protected in the sense that it could only be accessed from the internal network, but this check was easily fooled by adding an "X-Forwarded-For" header set to 127.0.0.1.
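The probe described above can be sketched as follows; the host and paths are hypothetical, and the snippet only constructs the downgraded URL and the spoofed header rather than sending real requests.

```python
# Sketch of the version-downgrade probe; host and paths are hypothetical.
def downgrade_version(url: str, old: str = "/v2/", new: str = "/v1/") -> str:
    """Rewrite the version segment of an API URL to probe older releases."""
    return url.replace(old, new)

# Client-supplied forwarding header used to fool naive "internal only" checks.
# A server must never trust this header from untrusted clients.
spoof_headers = {"X-Forwarded-For": "127.0.0.1"}

url = downgrade_version("https://example.com/api/v2/admin")
print(url)  # https://example.com/api/v1/admin
```

The defensive lesson is the inverse of the probe: versioned paths should be retired server-side, and source-IP checks must not rely on headers the client controls.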
Our second example comes from a third-party vendor. For a new feature to work, the target had to include certain libraries that used outdated APIs, but due to imperfect asset management, the target had no overview of these APIs' existence. These endpoints ultimately exposed additional functionality, which contained an RCE.
Preventive measures against Improper Assets Management
We need to ensure we have a proper inventory management system that includes all API endpoints, what is special about each of them, their environment, and the networks from which they are reachable. We should also inventory third-party integrations for these APIs, so that we have an overview of critical infrastructure that is easy to consult for the people who need it.
We all know software is becoming more and more dynamic, which means we will have to review and evaluate our inventory system on a regular basis.
Some interesting things to document could be:
CORS policy triggers
These days we can generate documentation automatically by adopting a specification such as OpenAPI. This makes it much harder, though still not impossible, to miss a rogue API. This documentation should be available to the users who need it.
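To illustrate how such a specification helps catch rogue endpoints, the sketch below diffs the endpoints actually observed on a server against the paths declared in a simplified, hypothetical OpenAPI document; anything observed but undeclared is a shadow-API candidate.

```python
# Minimal, hypothetical slice of an OpenAPI document
# (in practice parsed from the generated YAML/JSON file).
openapi_spec = {
    "openapi": "3.0.0",
    "paths": {
        "/api/v2/users": {},
        "/api/v2/orders": {},
    },
}

def find_shadow_endpoints(observed, spec):
    """Endpoints seen in traffic or scans but missing from the documentation."""
    documented = set(spec["paths"])
    return sorted(set(observed) - documented)

observed = ["/api/v2/users", "/api/v2/orders", "/api/v1/admin"]
print(find_shadow_endpoints(observed, openapi_spec))  # ['/api/v1/admin']
```

Because the spec is generated from the code rather than written by hand, this diff stays meaningful as the API evolves.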
Besides the measures above, it really helps to use external protections such as an API firewall, and we need to ensure it covers everything exposed to production, not just the production APIs themselves. Testing environments can also be exposed to the internet, especially with the current pandemic and working from home becoming more normal.
When it comes to data, it is important not to use production data in non-production environments; if it must be done, we should treat those endpoints with the same diligent standard as the production APIs.
When newer versions of an API are released, we should always perform a thorough risk analysis if that version contains security updates. We should then decide on our next steps and how to implement them: if an update is really required, we may have to get it out to our customers immediately; if leaving out the security fix has no client impact, we can decide whether it can safely be postponed.
It is incredibly easy to lose track of your APIs and which versions you are running where. I cannot stress enough the importance of keeping track of each one, as attackers will often be looking for an entry point that was carelessly deployed or that is still deployed from a long time ago. These entry points can lead to much bigger issues, so it is wise to enable only what is needed in a production environment. Do you want to secure your APIs? Consider the API security solution.
Stepan is a cybersecurity expert proficient in Python, Java, and C++. With a deep understanding of security frameworks, technologies, and product management, they ensure robust information security programs. Their expertise extends to CI/CD, API, and application security, leveraging Machine Learning and Data Science for innovative solutions. Strategic acumen in sales and business development, coupled with compliance knowledge, shapes Wallarm's success in the dynamic cybersecurity landscape.