What is Service Discovery in Microservices? Implementation

Modern application development revolves around microservices. This approach has made building applications easier than ever, as it breaks complex systems down into small, easy-to-handle components. Service discovery is a crucial concept related to microservices, and this post explains it in detail.


Service Discovery (SD): A Quick Brief

A microservices architecture can only be called a success when all of the microservices involved communicate smoothly, constantly exchanging information through APIs such as REST or SOAP. In most cases, microservices run in a highly virtualized or container-based ecosystem in order to interact, exchange information, and remain functional.

Because a wide range of containers is used and service instances change rapidly, effective communication between microservices is only possible when their presence can be accurately determined. Doing this by hand would be a tedious job, and it is handled by Service Discovery.

With its help, microservices can quickly detect the presence of the relevant instances in the ecosystem. At its most basic level, SD acts like a logbook of instances, recording the location details of every instance.

As the network path is part of an instance's location, the SD mechanism becomes pivotal whenever a client wants to request a service. Let's understand why it's required.

The Service Discovery mechanism is crucial because microservices rely on service location details, such as the IP address and port, to start a conversation. If these two details are not accessible, a microservice won't be able to locate the service it needs.

Service locations cannot be recorded by hand, because new service instances are constantly created and destroyed, and not every change can be tracked manually. This is especially true for cloud-based applications, which are scaled horizontally at regular intervals, so instance locations change dynamically.

SD collects these locations and makes them available to the microservices. Hence, every instance location remains at their disposal.

 

How Does It Work?

The entire modus operandi of the Service Discovery mechanism is based on two steps. First, it lets an instance register itself and announce its presence. Second, it provides a way for clients to locate the registered services.

Based on these two actions, there are two key SD patterns: client-side discovery and server-side discovery. Both come with their fair share of benefits and downsides and work in different ways. Next, we will look at what each of them means.
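
To make these two steps concrete, here is a minimal sketch in Go of the responsibilities a registry has to cover; the type and method names are illustrative placeholders rather than the API of any specific product.

package discovery

// Instance describes one running copy of a service: its network location.
type Instance struct {
    ServiceName string
    ID          string
    Host        string
    Port        int
}

// Registry captures the two halves of Service Discovery:
// letting instances announce themselves, and letting clients find them.
type Registry interface {
    Register(inst Instance) error                  // called when an instance starts
    Deregister(instanceID string) error            // called when an instance shuts down
    Lookup(serviceName string) ([]Instance, error) // used by clients or a router to locate instances
}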

  • Client-Side Service Discovery

In this pattern, the service client must query a compatible service registry (SR) in order to spot a service provider. It then picks a suitable available service instance using a load-balancing algorithm and forwards the request to it.

Here, as a service comes into action, its instance location is recorded in the SR instantly. Once a service instance is discarded, its location details are removed. This process runs continuously and is usually based on a heartbeat mechanism.

Wherever load balancing is used, this approach helps clients make wise decisions because they stay up to date about the available service instances. Load balancing becomes easy, since clients know which instances are ready to take on extra load.

The first query a client makes is directed to a central server, the Discovery Server. The role of the Discovery Server is to behave like a logbook or phone book of existing instances and to abstract them away from the client.
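
A minimal client-side sketch in Go could look like the following: the client asks a discovery server for the registered instances, picks one itself with a simple round-robin, and calls it directly. The registry URL and its /instances endpoint are hypothetical placeholders for whatever registry API is actually in use.

package client

import (
    "encoding/json"
    "fmt"
    "net/http"
    "sync/atomic"
)

type instance struct {
    Host string `json:"host"`
    Port int    `json:"port"`
}

var counter uint64 // simple shared round-robin counter

// CallService looks the target service up in the registry, picks an
// instance itself, and sends the request directly to that instance.
func CallService(service, path string) (*http.Response, error) {
    // 1. Ask the discovery server which instances are currently registered.
    resp, err := http.Get("http://registry.internal:8500/instances/" + service)
    if err != nil {
        return nil, err
    }
    defer resp.Body.Close()

    var instances []instance
    if err := json.NewDecoder(resp.Body).Decode(&instances); err != nil {
        return nil, err
    }
    if len(instances) == 0 {
        return nil, fmt.Errorf("no instances of %s registered", service)
    }

    // 2. Client-side load balancing: round-robin over the returned instances.
    next := instances[atomic.AddUint64(&counter, 1)%uint64(len(instances))]

    // 3. Call the chosen instance directly.
    return http.Get(fmt.Sprintf("http://%s:%d%s", next.Host, next.Port, path))
}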

The services are often deployed behind compatible API gateways to ensure API security. If no gateway is present, clients are responsible for handling concerns such as load balancing, cross-cutting concerns, and authentication themselves to fill the gap.

Netflix OSS is a real-world example of this pattern. In this scenario, Netflix Eureka serves as the SR and provides a REST API for managing service instance registration; it also allows registered instances to be queried. Netflix Ribbon is used as the IPC client for load balancing.

Now that the meaning of client-side SD is clear, let's look at the benefits it brings to the table.

To begin with, this is a very simple and easy-to-use approach, as it has no moving parts other than the service registry itself.

  • It puts full control of load balancing in the hands of the clients. 
  • Clients can optimize their decisions according to their requirements. 
  • It can make load balancing fully application-specific, for example via hashing. 
  • It doesn't require a separate load balancer to route requests. 
  • It leads to quick turnaround, as there is no intermediate hop adding latency.

While the advantages seem highly lucrative, the approach isn’t flawless. Have a look at certain disadvantages that are part of this approach. 

  • Because it couples clients to the service registry, it demands a separate discovery implementation for every programming language and framework in use. 
  • Its management is a bit tedious, as individual microservices use various frameworks, tools, programming languages, and technologies. 
  • One call isn't enough to reach a target microservice: clients have to make two calls per request, one to the registry and one to the service.

  • Server-Side Service Discovery

This methodology does not require clients to be aware of the service registry at all. Instead, it features a router that receives client requests and searches the SR on the client's behalf. It hunts down an existing instance and forwards the request once one is spotted.

Load balancing and picking the ideal service instance are not the client's concern. An API gateway or router is used to pick the optimal endpoint that can handle the incoming request.

The key resource here is the server-side router, which handles requests from the client and forwards them to the destination. To make this work correctly, a service location registry is maintained and used to find the appropriate service location, so that no extra effort is required on the client side.

Even though the client performs no load balancing itself, its requests are handled just as a load balancer would handle them. Here too, service instances are automatically registered as soon as a service starts, and deregistered immediately once the service is terminated.
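
A rough sketch of such a server-side router in Go is shown below: every incoming request triggers a registry lookup and is then proxied to a live instance, so the client never talks to the registry at all. The lookupInstance function is a hard-coded placeholder for a real registry query.

package main

import (
    "net/http"
    "net/http/httputil"
    "net/url"
    "strings"
)

// lookupInstance would normally query the service registry for a live instance
// of the named service; it is hard-coded here purely for illustration.
func lookupInstance(service string) (string, bool) {
    instances := map[string]string{"orders": "10.0.0.7:8080"}
    addr, ok := instances[service]
    return addr, ok
}

func route(w http.ResponseWriter, r *http.Request) {
    // Route by the first path segment, e.g. /orders/123 -> service "orders".
    service := strings.SplitN(strings.TrimPrefix(r.URL.Path, "/"), "/", 2)[0]

    addr, ok := lookupInstance(service)
    if !ok {
        http.Error(w, "no instance available for "+service, http.StatusBadGateway)
        return
    }

    // Forward the request; the client only ever talks to this router.
    target, _ := url.Parse("http://" + addr)
    httputil.NewSingleHostReverseProxy(target).ServeHTTP(w, r)
}

func main() {
    http.ListenAndServe(":8080", http.HandlerFunc(route))
}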

Amazon Web Services' Elastic Load Balancer (ELB) is a real-world example of this Service Discovery type. The ELB is responsible for balancing the incoming external and internal traffic that a service receives.

The ELB lets a client make a TCP or HTTP-based request using its DNS name. Once a request arrives, the ELB load-balances the traffic across the registered EC2 instances and ECS containers. Both of these resources are registered directly with the ELB, so no additional SR is needed.

In environments such as Kubernetes and Marathon, each cluster node runs a proxy that behaves like a server-side load balancer. The job of this proxy is to route incoming requests using the IP address and port of the target service.

From there, the request is forwarded to an active instance inside the cluster. Why adopt this approach? Here are a few of the advantages it delivers.

  • Client involvement is minimal, resulting in quick delivery of requests to service instances. 
  • It does not involve the client in the details of finding an available service instance.
  • As it creates an abstracted connection between server and client, it performs well. 
  • It doesn't force the service to implement discovery logic for every language and framework involved. 
  • In certain deployment ecosystems, this capability is provided for free. 
  • Only one call per request is required in this approach. 

The disadvantages of this approach are mentioned below. 

  • Unless it is provided by the deployment environment, the router is another system component whose setup and deployment have to be managed. 
  • The central router can become a bottleneck or single point of failure if it is not itself replicated and load-balanced. 
  • Clients are not allowed to pick the most fitting service instance themselves. 

What Is A Service Registry?

As you try to understand the meaning of Service Discovery, it's worth knowing about the service registry, as it's a key component of the concept. It is the database of the network locations of all service instances.

The SD mechanism captures the network location details and stores them in the SR. To guide microservices correctly, it's important that the Service Registry stays up to date and easily accessible.

Clients use the SR to find the network path and start communicating. Further, an SR usually consists of a cluster of servers kept consistent via a replication protocol.

Service Registration Options

As mentioned above, Service Discovery for microservices requires an SR to store instance locations. Effective registration and use of the SR are required for a seamless SD workflow. Here are the two available options.

  • Self-Registration

In this process, the whole responsibility for registration and de-registration with the SR is handled by the service instance itself. Where required, a service instance also sends periodic heartbeat requests to keep its registration alive.


This model is preferred for its simplicity and independence: it doesn't require any other system components. However, because it couples the service instance to the Service Registry, things become more complex, as registration code has to be implemented for every framework and language in use.
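
As an illustration, here is a small self-registration sketch in Go against Consul's HTTP API, assuming a local Consul agent listening on 127.0.0.1:8500; the service name, ID, and address are made up for the example. The instance registers itself with a TTL health check and then keeps the registration alive with heartbeat calls.

package main

import (
    "bytes"
    "log"
    "net/http"
    "time"
)

const agent = "http://127.0.0.1:8500"

// put issues an HTTP PUT against the local Consul agent.
func put(path string, body []byte) error {
    req, err := http.NewRequest(http.MethodPut, agent+path, bytes.NewReader(body))
    if err != nil {
        return err
    }
    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        return err
    }
    resp.Body.Close()
    return nil
}

func main() {
    // Register this instance, with a TTL check the instance itself must refresh.
    registration := []byte(`{
      "ID": "orders-1",
      "Name": "orders",
      "Address": "10.0.0.7",
      "Port": 8080,
      "Check": {"CheckID": "orders-1-ttl", "TTL": "15s"}
    }`)
    if err := put("/v1/agent/service/register", registration); err != nil {
        log.Fatal(err)
    }

    // Heartbeat loop: pass the TTL check well before it expires.
    for range time.Tick(5 * time.Second) {
        if err := put("/v1/agent/check/pass/orders-1-ttl", nil); err != nil {
            log.Println("heartbeat failed:", err)
        }
    }
}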

  • Third-party Registration

If you're looking for an alternative, third-party registration is your next option. Here, service instances do not bear the responsibility of registration. Instead, it is handled by an additional component, a service registrar, which keeps track of the running service instances.

To make this happen, the registrar either polls the deployment environment or subscribes to its events. Whenever a new service instance is detected, its location details are logged automatically. Dismissed service instances are de-registered just as automatically.


As the service code is not coupled to the Service Registry, there is no need to implement distinct registration logic for every framework and language involved.
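
The sketch below outlines, in Go, what such a registrar process could look like: it periodically compares the instances reported by the deployment environment with what it has registered and reconciles the difference. listRunningInstances and the Registry interface are hypothetical placeholders for environment- and registry-specific calls.

package registrar

import "time"

type Instance struct{ ID, Addr string }

type Registry interface {
    Register(Instance) error
    Deregister(id string) error
}

// listRunningInstances would ask the orchestrator (Kubernetes, Marathon, ...)
// which instances are currently running; stubbed out here.
func listRunningInstances() map[string]Instance { return nil }

// Run compares the environment's view with what is registered and reconciles.
func Run(registry Registry, interval time.Duration) {
    known := map[string]Instance{}
    for range time.Tick(interval) {
        running := listRunningInstances()
        for id, inst := range running { // new instances: register them
            if _, ok := known[id]; !ok {
                registry.Register(inst)
                known[id] = inst
            }
        }
        for id := range known { // vanished instances: deregister them
            if _, ok := running[id]; !ok {
                registry.Deregister(id)
                delete(known, id)
            }
        }
    }
}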

Service Discovery Implementations

How well this mechanism is deployed depends on the efficacy of the chosen strategy, access to the right resources, and use of the right implementation approach. The sections above explain the strategies and basic requirements. Next, let's talk about the viable implementation approaches.

  • DNS-based implementation

SD can be implemented using a DNS-based approach, in which traditional DNS libraries act as the client. In this category, each microservice consults the DNS zone file and performs DNS lookups to locate other microservices. Alternatively, a proxy such as NGINX can be configured alongside the microservices and used to poll the DNS, which yields the same SD mechanism.

This is an easy-to-use approach that works with almost all leading programming languages, and it hardly requires any code changes.

However, DNS usage isn't without limitations. For instance, DNS struggles to deliver real-time updates, and TTLs have to be fine-tuned because different clients apply different caching semantics. This approach also becomes a little expensive when you need to manage zone files or add more of them.

As DNS alone is not enough, extra resources are required to add resilience, which adds further operational overhead.
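
For illustration, the Go standard library alone is enough to perform such a lookup. The SRV name below follows the Kubernetes naming convention and is only an example; any DNS zone that publishes SRV records for service instances works the same way.

package main

import (
    "fmt"
    "log"
    "net"
)

func main() {
    // SRV lookup returns both host names and ports of the registered instances.
    _, addrs, err := net.LookupSRV("http", "tcp", "orders.default.svc.cluster.local")
    if err != nil {
        log.Fatal(err)
    }
    for _, a := range addrs {
        fmt.Printf("instance at %s:%d (priority %d, weight %d)\n",
            a.Target, a.Port, a.Priority, a.Weight)
    }
}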

  • Key/Value Store and sidecar implementation

In this implementation approach, Consul or ZooKeeper is used as the key resource. A sidecar is used for communication, and the microservices are designed to talk to locally hosted proxies.

Further, an additional process exchanges information with the Service Discovery (SD) backend and plays a crucial role in configuring the proxy.

For example, ZooKeeper was used to build SmartStack. If you use Consul, you get a broad spectrum of interfaces for driving sidecars; a real-world example of this is Stripe using HAProxy populated from Consul data.

Now, this approach is preferred because it delivers multiple benefits, such as complete transparency to the application code.

Developers can use any programming language without worrying about how the microservices talk to each other. But this transparency comes with certain compromises. For instance, the sidecar only has host-level knowledge of the SD data, and the proxy in use may not be able to make fine-grained, per-request decisions.

Also, sidecar usage adds latency to the process, forcing every call between microservices to take an extra hop. Lastly, the sidecar needs to be installed and re-tuned for every new microservice involved, which adds a lot of work and processing.

  • Specialized discovery and library/sidecar-based implementation  

In this last implementation approach, an API is exposed to the developer directly. The developer then uses a library such as Ribbon to communicate with a specific Service Discovery solution at run time.

Here, the developer is in direct control, and the results often come with different trade-offs. The approach demands full awareness inside the microservice code, so that API calls are made explicitly. The great thing about this approach is that it isn't resource-specific and is open to all sorts of hosts.

The deployment process is very simple, and there is no separate sidecar to deploy alongside every service. The main limitation of this methodology is that, in a polyglot ecosystem, you depend on client libraries being available for every language in use.
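
For example, an application could query Consul's health endpoint directly, assuming an agent on 127.0.0.1:8500 and a registered service named orders; dedicated client libraries such as Ribbon or the official Consul clients essentially wrap calls like this one.

package main

import (
    "encoding/json"
    "fmt"
    "log"
    "net/http"
)

// entry is the minimal slice of Consul's /v1/health/service response we care about.
type entry struct {
    Service struct {
        Address string `json:"Address"`
        Port    int    `json:"Port"`
    } `json:"Service"`
}

func main() {
    // Only instances whose health checks are passing are returned.
    resp, err := http.Get("http://127.0.0.1:8500/v1/health/service/orders?passing=true")
    if err != nil {
        log.Fatal(err)
    }
    defer resp.Body.Close()

    var entries []entry
    if err := json.NewDecoder(resp.Body).Decode(&entries); err != nil {
        log.Fatal(err)
    }
    for _, e := range entries {
        fmt.Printf("healthy instance: %s:%d\n", e.Service.Address, e.Service.Port)
    }
}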

Conclusion

As you switch to microservices, understanding the concept of Service Discovery is crucial. It is what helps one microservice contact another. By providing the right network path details, SD points microservices in the right direction.

This guide explained:

  • The meaning of Service Discovery
  • The importance of this mechanism
  • How it works
  • The meaning of a Service Registry
  • Service registration options

As you plan to use microservices, it’s important to understand automated and real-time SD mechanisms. This will make service instances easy to discover, further leading to seamless microservices usage.
