APIs drive today’s digital world, with around 89% of developers deploying various APIs in application development. While RESTful and SOAP APIs are the headline makers, Shadow APIs are troublemakers if they are not properly managed and watched over. Often neglected and undocumented, Shadow APIs can produce chaos beyond one’s imagination. In this post, let’s figure out why and how to secure Shadow APIs and keep the nuisance under control.
Let’s begin by defining the Shadow API.
Shadow APIs are undocumented and untracked APIs, often third-party, used by organizations and developers. They are so obscure and discreet that developers often remain oblivious to their presence and behavior.
Before looking for ways to secure Shadow APIs, it’s crucial to know how to locate them, as these APIs remain clandestine by nature.
The strategies listed below will help differentiate Shadow APIs from other APIs:
One of the most widely used approaches for spotting Shadow APIs is log analysis, wherein developers closely monitor real-time application logs. The majority of organizations have already adopted a logging solution for this job.
Analyzing application logs leads to immediate API issue detection and real-time remediation. Deployed properly, log analysis helps developers instantly identify freshly developed endpoints, response data, and various other API metrics.
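As a minimal sketch of the idea, the script below scans a combined-format access log for request paths and flags any endpoint missing from a hand-maintained inventory. The log path, the documented set, and the regular expression are illustrative assumptions, not part of any particular tool:

```python
import re

# Hypothetical inputs: adjust the log path and documented set for your stack.
ACCESS_LOG = "/var/log/app/access.log"
DOCUMENTED = {("GET", "/api/v1/users"), ("POST", "/api/v1/orders")}

# Matches the method and path in a combined-format access log line,
# e.g. '... "GET /api/v1/users?id=7 HTTP/1.1" 200 ...'
REQUEST_RE = re.compile(r'"(GET|POST|PUT|PATCH|DELETE)\s+([^\s?"]+)')

def find_undocumented(log_path):
    """Return (method, path) pairs seen in the log but absent from the inventory."""
    seen = set()
    with open(log_path) as log:
        for line in log:
            match = REQUEST_RE.search(line)
            if match:
                seen.add((match.group(1), match.group(2)))
    return seen - DOCUMENTED

if __name__ == "__main__":
    for method, path in sorted(find_undocumented(ACCESS_LOG)):
        print(f"possible shadow endpoint: {method} {path}")
```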
Log analysis tools like Sematext Logs, Insights, and InsightOps are worth a try.
Many jobs are automated with the help of these tools. But it’s not always the best bet, as log analysis demands huge storage space for the logged data, and one also needs to spend on data parsing tools. If misconfigured, log analysis can store either too much or too little information.
Also, the majority of log analysis tools are designed to record activity at the application level; HTTP requests themselves often go unlogged. This is a gap in log analysis, as it makes detecting API changes tedious.
The loopholes of log analysis can be closed by monitoring all API requests directly. With constant live monitoring, categorizing freshly observed API traffic becomes a doable job.
Without keeping developers tied up running tests, this approach detects performance glitches straightaway, saving huge amounts of time and effort. One more advantage live monitoring holds over other Shadow API detection methods is its ability to integrate either directly with the API gateway or through an already-deployed application agent.
Of these two options, gateway integration is preferred, as it’s simpler to implement and keeps unforeseen security concerns at bay.
But live monitoring has its costs: the information is captured either locally or by a third-party agent, and in either case the app slows down a bit. Additionally, this approach has no centralized implementation; it has to be set up for each application separately, which is a lot of work.
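To make the agent-style option concrete, here is a minimal sketch: a WSGI middleware that reports any request hitting an endpoint missing from a documented inventory. The inventory, the `/api/` prefix, and the print-based reporting are illustrative assumptions:

```python
# Minimal sketch of an agent-style monitor: a WSGI middleware that flags
# requests to endpoints missing from the documented inventory.
DOCUMENTED_PATHS = {"/api/v1/users", "/api/v1/orders"}  # illustrative inventory

class ShadowApiMonitor:
    def __init__(self, app):
        self.app = app
        self.unknown_paths = set()

    def __call__(self, environ, start_response):
        path = environ.get("PATH_INFO", "")
        if path.startswith("/api/") and path not in DOCUMENTED_PATHS:
            if path not in self.unknown_paths:
                self.unknown_paths.add(path)
                # In production you would ship this to a monitoring backend.
                print(f"undocumented endpoint observed: {path}")
        return self.app(environ, start_response)

# Usage: wrap any WSGI app, e.g.
# flask_app.wsgi_app = ShadowApiMonitor(flask_app.wsgi_app)
```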
Often known as static code analysis, code scanning means examining the application source code to map API usage. It differs from the other approaches in that it recognizes a Shadow API before it reaches production. Because nothing is monitored or injected while the application is live, it’s considered a non-invasive technique.
It’s a holistic approach and doesn’t demand tedious integration at every stage. It also helps API development meet GDPR compliance with little extra effort. The only downside of this technique is the complexity involved: the scanners are hard to build, and the tools used must be compatible with every language in the stack.
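As an illustrative sketch of that complexity, the scanner below uses Python’s ast module to collect endpoints declared through Flask-style @app.route(...) decorators; a real scanner would need equivalent logic for every framework and language in the stack:

```python
import ast
import sys

def find_declared_routes(source_path):
    """Collect paths declared via Flask-style @app.route(...) decorators."""
    tree = ast.parse(open(source_path).read(), filename=source_path)
    routes = []
    for node in ast.walk(tree):
        if not isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            continue
        for dec in node.decorator_list:
            # Match decorators of the form something.route("<path>", ...)
            if (isinstance(dec, ast.Call)
                    and isinstance(dec.func, ast.Attribute)
                    and dec.func.attr == "route"
                    and dec.args
                    and isinstance(dec.args[0], ast.Constant)):
                routes.append(dec.args[0].value)
    return routes

if __name__ == "__main__":
    for path in sys.argv[1:]:
        for route in find_declared_routes(path):
            print(f"{path}: declares endpoint {route}")
```

Comparing the declared routes against the documented inventory then surfaces endpoints that were added to the code but never documented.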
Lastly, you can try outbound proxies and trading platforms’ APIs to find Shadow APIs. API proxies intercept outbound API calls and redirect them through their own service, logging all the API requests and responses along the way. Trading platform APIs work like outbound proxies; the only difference is that on trading platforms, developers act as the intermediary.
The biggest benefit of these two methods is that an API usage catalog is created automatically, so developers don’t have to put effort into that job and know about API usage and consumption from the very beginning. But this approach demands explicit use of the proxy APIs in the codebase, and since traffic has to pass through the proxy, actual API performance drops a step.
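A bare-bones version of the idea, using only Python’s standard library, might look like the sketch below: a plain-HTTP forward proxy that logs every outbound request and response it relays. A production proxy would also handle HTTPS, other methods, and header forwarding; the port and timeout values here are arbitrary:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
import urllib.request

class LoggingProxy(BaseHTTPRequestHandler):
    """Forwards plain-HTTP GET requests and logs each outbound call."""

    def do_GET(self):
        # For proxy-style requests, self.path holds the absolute target URL.
        print(f"outbound call: GET {self.path}")
        try:
            with urllib.request.urlopen(self.path, timeout=10) as upstream:
                body = upstream.read()
                print(f"response: {upstream.status} ({len(body)} bytes)")
                self.send_response(upstream.status)
                self.send_header("Content-Type",
                                 upstream.headers.get("Content-Type",
                                                      "application/octet-stream"))
                self.send_header("Content-Length", str(len(body)))
                self.end_headers()
                self.wfile.write(body)
        except Exception as exc:
            self.send_error(502, f"upstream failure: {exc}")

if __name__ == "__main__":
    # Point clients at http://localhost:8899 as their HTTP proxy.
    HTTPServer(("localhost", 8899), LoggingProxy).serve_forever()
```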
From the above, one might conclude that identifying Shadow APIs is a tedious task. Still, you are bound to spot them and get rid of them, for the reasons mentioned below:
The OWASP API Security Top 10 list also acknowledges Shadow APIs.
In fact, experts have placed them under API9:2019 Improper Assets Management, one of the biggest security threats on the list. Businesses that fail to get rid of Shadow APIs in time are likely to face dire consequences. Facebook is a live example.
Its password update service started sending a 10-digit code to users’ phone numbers. This wasn’t what the developer team expected, and the team soon realized it was a security loophole that would give hackers a chance to crack the code. Though the issue was resolved by introducing rate limiting, it managed to create temporary chaos.
Facebook is technically sound and has a team of IT experts who took care of the issue at the infancy stage. Not every business is that fortunate; the rest can face endless hassles because of the presence of Shadow APIs.
The damage done by a Shadow API can be beyond repair if immediate and appropriate steps are not taken. Thankfully, there are a couple of viable practices that will save your APIs from falling into the shadow pit.
Certain predefined API standards exist, and adherence to them leads to minimal API anomalies. The OpenAPI Specification helps developers understand the purpose of an API from the very beginning, and it offers a standard description language for both machines and humans. Adherence to these universal standards lets developers keep issues to a minimum.
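One practical payoff is that the specification doubles as a machine-readable inventory. The sketch below, which assumes PyYAML is installed and an illustrative openapi.yaml file, extracts every documented operation so observed traffic can be checked against it:

```python
import yaml  # PyYAML; assumed to be installed

HTTP_METHODS = {"get", "post", "put", "patch", "delete", "head", "options"}

def documented_operations(spec_path):
    """Extract (METHOD, path) pairs from an OpenAPI 3.x document."""
    with open(spec_path) as f:
        spec = yaml.safe_load(f)
    ops = set()
    for path, item in spec.get("paths", {}).items():
        for method in item:
            if method in HTTP_METHODS:
                ops.add((method.upper(), path))
    return ops

# Any (method, path) seen in live traffic but absent from this set is a
# candidate Shadow API.
print(documented_operations("openapi.yaml"))
```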
By automating API documentation at an early stage, developers can save the huge effort otherwise invested in manual documentation. Documentation updates can be incorporated through the CI/CD process.
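As one possible shape for such a CI/CD step, the script below regenerates the spec and fails the build when it no longer matches the committed one. The generate_spec.py command and file names are hypothetical placeholders for whatever generator your framework provides:

```python
import subprocess
import sys

# Hypothetical generator command: replace with whatever produces your spec,
# e.g. a framework plugin that dumps the live route table as OpenAPI.
GENERATE_CMD = ["python", "generate_spec.py", "--out", "generated-openapi.yaml"]

def spec_matches_committed():
    subprocess.run(GENERATE_CMD, check=True)
    committed = open("openapi.yaml").read()
    generated = open("generated-openapi.yaml").read()
    return committed == generated

if __name__ == "__main__":
    if not spec_matches_committed():
        print("API documentation drift detected: regenerate and commit openapi.yaml")
        sys.exit(1)  # fail the CI job so undocumented changes can't ship
```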
Before a new API version is released, API security experts should take some time to analyze the security posture of the earlier versions. Doing so helps assess unforeseen security risks in the new version and makes it more secure.
While automatic API documentation is a real savior, it can’t do wonders alone. API security experts need to delve into details like the API endpoints, their modus operandi, and their live status. Continual API inventory monitoring helps developers eradicate the possibility of Shadow APIs.
Even with extensive effort, some old or outdated APIs will remain active. With the help of backporting, developers can introduce updated security practices into older APIs as well, making them sound enough to withstand security attacks.
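For instance, the rate limiting that saved Facebook can be backported onto a legacy endpoint without rewriting it. The sketch below uses a simple sliding-window limiter as a decorator; the limits and the legacy handler name are illustrative:

```python
import time
from functools import wraps

def rate_limited(max_calls, per_seconds):
    """Retrofittable decorator: allow at most max_calls per per_seconds window."""
    calls = []

    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            now = time.monotonic()
            # Drop timestamps that have left the sliding window.
            calls[:] = [t for t in calls if now - t < per_seconds]
            if len(calls) >= max_calls:
                raise RuntimeError("rate limit exceeded")  # map to HTTP 429 in a web app
            calls.append(now)
            return func(*args, **kwargs)
        return wrapper
    return decorator

# Backporting: wrap a legacy handler without touching its body.
@rate_limited(max_calls=5, per_seconds=60)
def legacy_password_reset(user_id):
    ...
```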
CORS, or Cross-Origin Resource Sharing, is a mechanism that controls which origins are allowed to access an API’s resources. Configured strictly, this modern approach makes it easier to reduce API compromise at the hands of hackers.
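A minimal sketch of a strict configuration, assuming a Flask application and an illustrative allowlist, looks like this:

```python
from flask import Flask, request

app = Flask(__name__)

# Only origins on this allowlist may call the API from a browser.
ALLOWED_ORIGINS = {"https://app.example.com"}  # illustrative

@app.after_request
def apply_cors(response):
    origin = request.headers.get("Origin")
    if origin in ALLOWED_ORIGINS:
        response.headers["Access-Control-Allow-Origin"] = origin
        response.headers["Vary"] = "Origin"
        response.headers["Access-Control-Allow-Methods"] = "GET, POST"
    return response
```

Requests from origins outside the allowlist simply receive no CORS headers, so browsers refuse to hand the response to the calling page.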
API protection, continual monitoring, and better management are some of the most viable ways to keep the occurrence of Shadow APIs under control. However, it’s easier said than done, as the job involves many complexities.
Wallarm brings all the needed expertise to one centralized place and makes the job easier. Its API security platform makes it possible for you to work flawlessly.
With Wallarm in use, your API developers will deploy best practices and approaches throughout API development, and they won’t let APIs lie dormant and create issues for you. So hand over the task of securing your APIs to Wallarm and enjoy ultimate peace of mind.