Ingress controller vs. API gateway: A comparison

Choosing between an ingress controller and an API gateway to manage access and route traffic to backend APIs can be challenging. Both options offer overlapping functionalities but serve distinct purposes. This post clarifies how these two solutions differ and when to leverage each. Spoiler alert: you may need both!

What is an ingress controller?

An ingress controller is a Kubernetes-native component designed to seamlessly route HTTP/HTTPS and other TCP traffic from the outside world (also known as north-south traffic) to the correct backend service running inside the Kubernetes cluster.

Since external components lack the context to determine which pod or container should handle a request, an internal component—the ingress controller—is necessary. Essentially, an ingress controller translates Ingress resources into routing rules that reverse proxies can recognize and implement.

You specify ingress rules in a manifest file and apply them to your cluster with kubectl, but you'll need to deploy an ingress controller to enforce these rules.
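For example, a minimal Ingress manifest might look like the following sketch; the hostname, service name, and ingress class are placeholders you would replace with values from your own cluster.

```yaml
# example-ingress.yaml: a minimal Ingress resource (hostname, service
# name, and ingress class are placeholders for your own environment)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  ingressClassName: example-ingress-class   # must match the ingress controller you deployed
  rules:
    - host: www.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web        # an existing Service in the same namespace
                port:
                  number: 80
```

Applying the manifest with kubectl apply -f example-ingress.yaml creates the Ingress resource; the ingress controller you deployed watches for it and begins routing matching requests.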

Ingress controllers offer several features to shape the traffic that reaches your upstream service.

Traffic routing and load balancing

Ingress controllers manage north-south traffic from outside the cluster, directing it to the appropriate pods within the platform. They provide a single point of entry, allowing one DNS hostname or IP address to reach all of the services in your cluster. Traffic is routed based on criteria such as domain names and URL paths, as well as factors like the current load on each pod.

HTTP and HTTPS routing

Ingress controllers expose resources over HTTP and HTTPS. They control routes through rules defined in the Ingress resource, which allows for sophisticated routing capabilities. While not part of the original Ingress specification, many ingress controllers now also support TLS and raw TCP traffic routing through custom annotations.

For example, you can define rules to direct traffic based on specific domain names or subdomains, ensuring that the correct backend services handle requests for different parts of your application. Additionally, you can use URL path-based routing to send requests to multiple services depending on the URL path specified by the client.
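As a sketch (the hostname, paths, and service names below are placeholders), a single Ingress can split one domain across two backend services by URL path:

```yaml
# Illustrative only: hostname, paths, and service names are placeholders
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: path-routing
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /api          # requests to example.com/api/...
            pathType: Prefix
            backend:
              service:
                name: api-service
                port:
                  number: 8080
          - path: /app          # requests to example.com/app/...
            pathType: Prefix
            backend:
              service:
                name: app-service
                port:
                  number: 80
```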

SSL/TLS termination

Ingress controllers handle SSL/TLS termination, decrypting incoming requests and encrypting responses to ensure secure communication between clients and services. This centralizes certificate management in a single location rather than in individual pods, simplifying security and maintenance.
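In a standard Ingress resource, TLS termination is configured with a tls section that references a Kubernetes Secret holding the certificate and private key. In the sketch below, example-tls is a placeholder Secret you would create yourself (for example with kubectl create secret tls):

```yaml
# Illustrative only: the Secret example-tls must already contain
# a tls.crt/tls.key pair for www.example.com
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: tls-ingress
spec:
  tls:
    - hosts:
        - www.example.com
      secretName: example-tls   # the ingress controller terminates TLS with this certificate
  rules:
    - host: www.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80
```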

Name-based virtual hosting 

Name-based virtual hosting allows an ingress controller to route requests to different services within a Kubernetes cluster based on the hostname specified in the request. With name-based virtual hosting, a single ingress controller can manage traffic for multiple hostnames, each potentially directing to different backend services. For example, you can route requests to api.example.com to one service and app.example.com to another. 

Name-based virtual hosting also works in the other direction, letting clients reach a single backend service from multiple domains or subdomains. For example, you can route traffic from customer1.example.com and customer2.example.com to the same backend service, providing each customer with a custom white-labeled domain for the same service.
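A sketch of the first pattern, with placeholder hostnames and services, looks like this; the white-label case is the same manifest with several hosts pointing at one backend service:

```yaml
# Illustrative only: hostnames and service names are placeholders
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: virtual-hosting
spec:
  rules:
    - host: api.example.com        # requests for this hostname...
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: api-service  # ...go to the API backend
                port:
                  number: 8080
    - host: app.example.com        # while this hostname...
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-service  # ...goes to the web app backend
                port:
                  number: 80
```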

Dynamic reconfiguration 

Ingress controllers can update routing rules immediately as pods are added or removed from the cluster, always directing traffic to healthy backend services. The ingress controller constantly monitors the state of the pods in the cluster and automatically adjusts the load balancing to distribute traffic evenly among the active pods. 

Authentication and authorization

Many ingress controllers offer robust authentication and authorization capabilities. They integrate with third-party identity providers (IdPs) that provide SSO, OAuth, OpenID Connect, LDAP, and SAML support. Some ingress controllers enhance security further by supporting mutual TLS (mTLS), which requires the client and server to authenticate each other with certificates, ensuring that only trusted clients can communicate with your services.

What is an API gateway?

Rather than being a standalone piece of software, an API gateway encompasses a suite of use cases and features that are typically fulfilled by load balancers, reverse proxies, or ingress controllers. Some vendors provide dedicated API gateway software packages, but you might already have an application that can function as an API gateway.

For instance, if you're running on Kubernetes, you can set up an API gateway using ngrok’s ingress controller. However, you would need to run the ngrok agent or one of our SDKs to configure an API gateway for services running on different infrastructure, such as EC2 instances.

An API gateway accepts requests, applies rules, and forwards authorized traffic to backend APIs. It serves as the entry point for external clients to access your application. 

Whatever technology you choose, it must implement these fundamental use cases to function effectively as an API gateway.

Request routing and composition

API gateways route incoming requests to the appropriate backend services based on properties like the request path, HTTP headers, and query parameters. They can provide granular method-level routing to give you more control over your API endpoints. 

An API gateway can deconstruct a request, call multiple backend APIs, and reconstruct a single response to send back to the client, thereby abstracting the complexity of numerous backend APIs from the client application. API gateways hide this complexity by managing access, traffic, and translation, allowing developers to focus on building services while providing clients with a more streamlined integration experience.
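How this is configured varies widely between products. As a purely hypothetical sketch (this is not any specific gateway's configuration schema; the route and backend names are invented), method-level routing and response composition might be expressed like this:

```yaml
# Hypothetical gateway route configuration (illustrative only, not a real product's schema)
routes:
  - match:
      path: /orders
      methods: [GET]              # reads can go to a read-optimized service
    backend: orders-read-service
  - match:
      path: /orders
      methods: [POST, PUT]        # writes go to the write path
    backend: orders-write-service
  - match:
      path: "/orders/{id}/summary"
    compose:                      # fan out to several backend APIs and merge the responses
      - backend: orders-service
        path: "/orders/{id}"
      - backend: shipping-service
        path: "/shipments?order={id}"
```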

Protocol translation

API gateways facilitate seamless communication between clients and backend services by translating between protocols. For instance, an API gateway can convert an incoming SOAP request into a REST request before sending it to the upstream service and vice versa. This capability enables smooth integration between modern microservices and legacy systems that rely on older protocols.

Authentication and authorization

API gateways play a crucial role in enforcing security policies, including authentication, authorization, and protection against various attacks. They act as gatekeepers, ensuring that only legitimate traffic reaches your backend services.

API gateways can authenticate incoming requests using various methods, such as JWTs (JSON Web Tokens) or API keys. This type of authentication ensures that only verified users or systems can access the APIs. JWTs are particularly useful for stateless authentication, as they include user identity and claims information in a secure, compact format.
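The exact configuration depends on the gateway, but a JWT policy usually comes down to a few settings. The sketch below is hypothetical and not any specific product's schema; the issuer, audience, and JWKS URL are placeholders:

```yaml
# Hypothetical gateway authentication policy (illustrative only)
auth:
  jwt:
    issuer: https://idp.example.com/          # tokens must come from this identity provider
    audience: orders-api                      # and be intended for this API
    jwks_uri: https://idp.example.com/.well-known/jwks.json   # public keys used to verify signatures
    required_claims: [sub, scope]
  api_keys:
    header: X-API-Key                         # simpler alternative for service-to-service callers
```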

Rate limiting and circuit breaking

Rate limiting is an essential feature of an API gateway designed to control the number of requests a client can make in a given period. This mechanism helps prevent any single client from overwhelming the system with too many requests, ensuring fair usage and maintaining the performance and availability of your APIs. API gateways typically allow you to configure rate limits based on various criteria, such as the IP address, API key, or user account. This flexibility lets you tailor rate limiting policies to different use cases and user groups.

Additionally, many API gateways offer circuit breaker policies or plugins that monitor the health of your backend services and stop forwarding requests when your system becomes overloaded, based on thresholds you set in advance.
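Again, the syntax is product-specific; a hypothetical policy combining both features (not any real gateway's schema) might look like:

```yaml
# Hypothetical gateway traffic policy (illustrative only)
rate_limit:
  key: api_key                # count requests per API key (could also be client IP or user)
  limit: 100
  window: 1m                  # at most 100 requests per key per minute
  on_exceeded: 429            # respond with HTTP 429 Too Many Requests
circuit_breaker:
  error_rate_threshold: 0.5   # open the circuit if more than half of requests fail...
  window: 30s                 # ...within a rolling 30-second window
  cooldown: 60s               # stop forwarding for 60 seconds before probing the backend again
```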

Load balancing and scalability 

API gateways improve application scalability and availability by distributing incoming requests across multiple instances. They can continuously monitor the health of each server and block or redirect traffic from failing instances and those with high latency to prevent cascading failures. 

Keep in mind that API gateways typically do not provide autoscaling capabilities. However, they allow you to utilize an autoscaled application effectively by balancing requests across instances and maintaining shared state.  

Caching

API gateways can cache responses to reduce the load on services and improve response times for frequently requested data. Whether or not your APIs will benefit from caching depends on how clients use them. For example, if clients frequently request the same data, caching will allow the API gateway to respond to each client without calling your API for each request, reducing the load on your service. APIs that often handle concurrent requests especially benefit from caching. On the other hand, if the responses from your API constantly change, you won’t see a performance increase with caching since the responses are typically unique. 
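As a hypothetical sketch (not a real product's schema; the paths and settings are invented), a caching policy typically pairs a match rule with a TTL and a way to vary or bypass the cache:

```yaml
# Hypothetical gateway caching policy (illustrative only)
cache:
  match:
    methods: [GET]                        # only cache safe, repeatable reads
    paths: ["/products", "/products/*"]
  ttl: 60s                                # serve cached copies for up to one minute
  vary_by: [Accept-Language]              # keep separate cache entries per language
  bypass_when_header: Cache-Control       # assumption: clients send no-cache to force a fresh response
```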

Monitoring and analytics 

Because they sit at a central access point, API gateways can provide insights into API usage, performance, and potential issues through monitoring and logging capabilities. Many API gateways integrate with third-party SIEM (Security Information and Event Management) systems and can aggregate data across endpoints for better visibility into and analysis of API usage.

What’s the difference: API gateway vs. ingress controller

While API gateways and ingress controllers share many functions, their purposes are distinct and complementary, not mutually exclusive. Let’s examine the differences based on scope, functionality, benefits, and limitations.

Scope

Ingress controllers are exclusive to Kubernetes. They utilize standard Kubernetes Ingress resources and implement the rules defined in those resources. API gateways, on the other hand, are platform-agnostic and not tied to any container orchestration system. 

Functionality

Key functionalities of an ingress controller include load balancing, SSL termination, and name-based virtual hosting. It is designed specifically to route traffic from outside the cluster to the right pods inside it. Beyond routing, API gateways simplify a microservices architecture by providing a unified entry point with additional capabilities. This entry point brings all of your interconnected APIs and services together in one place for easy accessibility.

Benefits

An ingress controller is well suited to simple routing requirements and is relatively hassle-free to configure and manage. It takes advantage of the Kubernetes ecosystem, which provides autoscaling and service discovery capabilities. An API gateway, on the other hand, presents a more comprehensive package for API management, including advanced security, rate limiting, and analytics. It offers greater adaptability by handling traffic both within Kubernetes and outside its scope, which opens up more diverse architecture options, including support for multiple protocols beyond HTTP/S. With tools like ngrok, you can get a reverse proxy, load balancer, and API gateway in one unified platform.

Limitations

While both tools handle traffic efficiently, each has its limitations. Ingress controllers are limited to Kubernetes clusters and often offer only basic authentication and load balancing. With their expanded capabilities, API gateways can be more complex to set up and maintain and may incur higher costs. They also require additional configuration or development as your APIs evolve and expand. And an API gateway can't stand alone inside a Kubernetes cluster; you still need an ingress controller to route external traffic to it.

Ingress controller vs. API gateway: One or both?

You need an ingress controller to use a Kubernetes cluster in a production environment. The question is—what other capabilities do you need?

If your ingress controller does not support the API gateway use cases, putting a separate API gateway “in front” of your ingress controller might be a good idea. The advanced traffic management, caching, robust security, protocol management, and other features will improve performance and simplify system management and maintenance. Better yet, deploy an ingress controller—such as ngrok’s—that provides the API gateway features, and reduce the number of tools you need to maintain. 

Sign up today to explore the capabilities of both the ngrok Kubernetes Ingress Controller and our new developer-defined API Gateway.

For questions or assistance, please don’t hesitate to reach out. Connect with us on Twitter or contact us at support@ngrok.com.

Mandy Hubbard
Mandy Hubbard is a seasoned technologist with a strong QA and developer advocacy background. She is passionate about software quality, CI/CD, good processes, and great documentation. Mandy is currently a Sr. Technical Marketing Engineer at ngrok, where she combines her technical experience and creative skills to help bring new features to customers.