Developer experience in deployed API gateways: Kong vs. ngrok

May 14, 2024 | 10 min read

Joel Hans

The best APIs are designed on two guiding principles:

  • Delivering a fantastic developer experience (DX) to the API consumer.
  • Minimizing their time to first call (TTFC) using whatever means necessary.

You can measure the latter with tactics like product analytics and surveys (or even ask your technically minded children to give it a go), but the concept of DX is slippery at best. Does it equate purely to speed? Enabling a flow state? Reducing the back-and-forth of GitHub or Jira tickets between teams? Helping developers shift security left without completely overwhelming them with prohibitive warnings?

Despite the fuzzy definition, every API gateway product also claims a better experience for the API developer responsible for deploying endpoints and onboarding consumers. The logic makes sense—if the API gateway is easy to work with and simplifies otherwise complex networking requirements, the API developer will ship faster, minimize outages, and ultimately create better DX downstream for the consumer.

In our estimation, DX has two distinct phases:

  1. Simplifying the process—whether in writing code, configuring infrastructure, or negotiating with colleagues—of moving from state A to goal B.
  2. Unlocking long-term benefits and time-savings only possible once B has been achieved. Think repeatable deployments through declarative configurations, simple- or zero-configuration networking through abstraction, deploying on a single node versus being deployed on a global edge, and so on.

With that in mind, let’s walk through setting up the fundamental feature of API gateways—authentication that protects your data, infrastructure, and API consumers—in Kong versus ngrok.

Our API gateway developer experience testbed

Before we can showcase the process involved in both deployed API gateways, we need to establish the baseline API gateway deployment most developers are looking to start with—an API MVP, if you will:

  • A deployed API gateway to route traffic to one or more endpoints.
  • Integration with Auth0 as your identity provider (IdP).
  • Enforcement of JSON Web Tokens (JWTs), managed by Auth0, for authenticating API consumers.
    • JWTs are an efficient open standard for sending signed data as a JSON object, including a header, payload, and signature. Because they’re cryptographically signed and can be revoked by the issuer, you can flexibly and confidently use them to protect your APIs from abuse or data loss.
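To make the header-payload-signature structure concrete, here’s a minimal shell sketch that builds a toy, unsigned token and decodes a segment back out. The claim names and values are purely hypothetical, and a real JWT would carry a genuine RS256 signature in the third segment:

```shell
# A JWT is three base64url segments joined by dots: header.payload.signature.
# b64url: standard base64, minus padding, with the URL-safe alphabet.
b64url() { base64 | tr -d '=\n' | tr '/+' '_-'; }

header=$(printf '%s' '{"alg":"RS256","typ":"JWT"}' | b64url)
payload=$(printf '%s' '{"sub":"consumer-01"}' | b64url)
token="$header.$payload.<signature>"

# Decode the header segment: restore the standard alphabet and padding first.
seg=$(printf '%s' "$token" | cut -d. -f1 | tr '_-' '/+')
while [ $(( ${#seg} % 4 )) -ne 0 ]; do seg="$seg="; done
printf '%s' "$seg" | base64 -d; echo   # -> {"alg":"RS256","typ":"JWT"}
```

The signature is what the gateway actually verifies; the header and payload are readable by anyone who holds the token, which is why secrets never belong in JWT claims.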

Because we’re talking about deployed API gateways, we’re focused on two products specifically: Kong Gateway and ngrok’s API gateway.

Developer experience with Kong’s API gateway

Here’s an overview of the process of deploying a JWT-secured API using Kong Gateway.

Deploy Kong Gateway on your cloud provider of choice or on-premises. Very few organizations, save for the scrappiest startups, would leave an API developer responsible for such a task. This is the realm of DevOps or IT, who would lead new infrastructure deployments, but we’re playing a bit of devil’s advocate here. And even though those teams would abstract a lot of complexity away from you by taking on the project, you have no guarantee they would do it quickly.

As you move forward, you’ll need to investigate Kong Gateway’s various deployment topologies and decide whether you want a hybrid, traditional (database-backed), or database-less deployment. From there, you’ll also need to explore the installation options to find the open-source package that works best with your organization’s existing infrastructure, whether that’s Docker, Kubernetes, or Linux VMs.

From here on out, you’ll use the Admin API and curl to configure your API gateway and enable essential features like authentication.

Create a Service for your API.

curl -i -f -X POST http://localhost:8001/services \
  --data "name=your-amazing-api" \
  --data "url=http://example.com"

Create a Route.

# Using the SERVICE_ID returned in the previous step:
curl -i -f -X POST http://localhost:8001/routes \
  --data "service.id=<SERVICE_ID>" \
  --data "paths[]=/route_01"

Enable the JWT Plugin on your new Route.

curl -X POST http://localhost:8001/routes/<route-id>/plugins \
  --data "name=jwt"

Create an account with Auth0 if you don’t have one already and create a new API.

Download your X509 Certificate directly from Auth0 using your ORGANIZATION_NAME and REGION_ID:

curl -o <ORGANIZATION_NAME>.pem https://<ORGANIZATION_NAME>.<REGION_ID>.auth0.com/pem

Extract the public key from your X509 Certificate.

openssl x509 -pubkey -noout -in <COMPANYNAME>.pem > pubkey.pem
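If you want to sanity-check this extraction step locally before touching Auth0, you can run the same command against a throwaway self-signed certificate. The CN below is hypothetical and nothing here contacts any external service:

```shell
# Generate a throwaway RSA key and self-signed X509 certificate (illustrative only).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=example.test" -keyout throwaway-key.pem -out throwaway-cert.pem 2>/dev/null

# Extract the public key, exactly as you would from the downloaded Auth0 .pem.
openssl x509 -pubkey -noout -in throwaway-cert.pem > pubkey.pem

head -n 1 pubkey.pem   # -> -----BEGIN PUBLIC KEY-----
```

The resulting pubkey.pem has the same PEM shape as the one you’ll extract from Auth0’s certificate, which is what Kong expects in the next step.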

Create an API Consumer using this public key from Auth0, which is stored on your local workstation.

# Create the Consumer:
curl -i -X POST http://localhost:8001/consumers \
  --data "username=<USERNAME>"

# Attach a new JWT, based on your public key, to said consumer:
curl -i -X POST http://localhost:8001/consumers/<consumer>/jwt \
 -F "algorithm=RS256" \
 -F "rsa_public_key=@./pubkey.pem" \
 -F "key=https://<YOUR_AUTH0_TENANT>.auth0.com/"

Pass your API consumer their access_token, generated by Auth0, which they must add as an Authorization: Bearer… header to their API requests.

Repeat the consumer-creation and token-passing steps for additional API consumers, or develop an automatic process for passing access tokens to API consumers directly after registration.
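If you do script the onboarding, a dry-run wrapper is a safe starting point. This hypothetical helper only prints the Admin API calls it would make, so you can review them before pointing anything at a live gateway; the usernames and Admin API address are placeholders:

```shell
# Dry-run sketch: print (rather than execute) the Kong Admin API calls needed
# to register a consumer and attach an RS256 JWT credential. Review the output,
# then run the commands yourself once they look right.
ADMIN_API="http://localhost:8001"
KEY_CLAIM="https://<YOUR_AUTH0_TENANT>.auth0.com/"

register_consumer() {
  echo "curl -i -X POST $ADMIN_API/consumers --data username=$1"
  echo "curl -i -X POST $ADMIN_API/consumers/$1/jwt" \
       "-F algorithm=RS256 -F rsa_public_key=@./pubkey.pem -F key=$KEY_CLAIM"
}

for user in alice bob; do
  register_consumer "$user"
done
```

Swapping the echo for a real invocation turns this into the automatic registration process mentioned above, but only after you trust the generated commands.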

Developer experience with ngrok’s API gateway

Time for ngrok’s turn—let’s walk through the same process of what you’d need to do, as a developer, to deploy a production-ready API gateway for your new service:

Create an ngrok account if you don’t have one already.

Deploy the ngrok agent flexibly based on your existing infrastructure, entirely for free. ngrok’s developer-defined API gateway is built into the agent itself and requires no external databases or networking configuration, which lets you deploy without all the complex discussions about topology and potential conflicts with other services that you’d inevitably have with Kong Gateway.

ngrok deploys on many systems, including Windows, macOS, and all the hardware variants that run Linux. If your organization has gone cloud native, you can configure the ngrok Kubernetes Operator and the new Kubernetes Gateway API with commands that even the most Kubernetes-shy developer can handle confidently. If you don’t want to manage the agent lifecycle, use our SDKs to embed the ngrok agent into your Go, Rust, JavaScript, or Python backend.

Once installed, authenticate your agent with an authtoken.

On Linux, macOS, or Windows:
ngrok config add-authtoken <TOKEN>

On Kubernetes:
export NGROK_AUTHTOKEN=<TOKEN>

With an SDK and embedded within your app:
NGROK_AUTHTOKEN="<TOKEN>" go run main.go
# or, for a Rust backend:
NGROK_AUTHTOKEN="<TOKEN>" cargo run …

Configure one or more routes based on how you deployed ngrok.

Create an account with Auth0 if you don’t have one already and create a new API.

Configure your ngrok Agent to enforce JWT-based authentication using the new Traffic Policy feature in a declarative .yml configuration file, which you can version control with Git for code reviews and repeatability.

inbound:
  - name: JWT Validation
    actions:
      - type: jwt-validation
        config:
          issuer:
            allow_list:
              - value: https://<YOUR_AUTH0_TENANT>.us.auth0.com/
          audience:
            allow_list:
              - value: <YOUR_NGROK_DOMAIN>
          http:
            tokens:
              - type: jwt
                method: header
                name: Authorization
                prefix: "Bearer "
          jws:
            allowed_algorithms:
              - RS256
            keys:
              sources:
                additional_jkus:
                  - https://<YOUR_AUTH0_TENANT>/.well-known/jwks.json

Pass your API consumer their access_token, generated by Auth0, which they must add as an Authorization: Bearer… header to their API requests.

Repeat the token-passing step for additional API consumers, or develop an automatic process for passing access tokens to API consumers directly after registration.

What’s next on your route to an API epiphany?

As explained earlier, the better the DX for a deployed API gateway, the faster you can deploy and enable authentication. A “time to first JWT-validated and authenticated call” doesn’t have quite the same ring, but it’s a metric worth considering as you evaluate your next deployed API gateway solution.

In our estimation, ngrok improves the process side of DX in two big ways:

1. Simplified provisioning. With ngrok, most of the complex networking infrastructure needed to run a performant and flexible API gateway has been abstracted to the ngrok network edge. Our platform automatically adds global load balancing capabilities, via multiple Points of Presence (PoPs) across the globe, on top of your existing API deployment. With JWT-based authentication enforced at the edge, you get DDoS protection thrown in for free.

Trying to architect, configure, and deploy a parallel architecture on your own, as you would with Kong Gateway, would cost hundreds of engineering hours over weeks or months and come with considerable new cloud-provider expenses… all for a single-region deployment.

2. Straightforward configuration using familiar tools and repeatable processes. ngrok’s Traffic Policy module uses YAML and Common Expression Language (CEL) expressions for flexibly manipulating requests and responses through your deployed API gateway—configurations that instantly become source-controllable for CI/CD pipelines or GitOps deployments. There are no additional CLI tools to learn or complex deployment processes to follow.

By default, Kong Gateway requires you to use curl to interact with the Admin API, burying all your configuration steps in your terminal history. To be fair, Kong has more recently introduced decK, which lets you manage API delivery from the CLI. decK makes deploying a Kong gateway declaratively more feasible, but it still relies on complex syntax, like cat kong.yml | deck file add-plugins --selector='services[*]' plugin1.json plugin2.yml, just to add plugins like JWT authentication.

The DX benefits of ngrok’s developer-defined API gateway don’t stop with process and setup.

  • Global acceleration and load balancing: If you wanted to replicate ngrok’s Global Server Load Balancing (GSLB) with another provider, you’d pay an exorbitant monthly bill to deploy in multiple regions, and you’d still need operations and network teams to help you secure and maintain it.
  • More developer ownership of operations: While operators can still add control and governance guardrails to your organization’s use of the ngrok API gateway, API developers retain full ownership over features like rate limiting and fine-tuned request/response manipulation.
  • Development environment independence: Because ngrok abstracts away all the networking and infrastructure complexity of a standard deployed API gateway like Kong, you can use a single .yml file to develop and test in any environment, whether local, CI, or multi-cloud.

Sign up for early access today to explore all things DX in the ngrok API Gateway. If you have questions, issues, or features to request, you can always find us on X, in the ngrok Slack community, or directly at support@ngrok.com.

Joel Hans
Joel Hans helps open source and cloud native startups generate commitment through messaging and content, with clients like CNCF, Devtron, Zuplo, and others. Learn more about his writing at commitcopy.com.