API gateway gallery: Drop-in API policy management examples

API developers deserve API gateways that are flexible enough to operate in whichever way gets them to production fastest. Simple enough for them to understand the current state without having to loop in a peer in operations. Programmable enough to quickly make the changes necessary to protect their API's reliability, performance, and developer experience for the consumers downstream.

While other API gateway providers seem to operate under the assumption that this combination is impossible to achieve, ngrok starts with an API gateway that’s truly developer-defined.

Let’s take a closer look at the difference between ngrok’s API toolkit and the entrenched (aka expensive) coterie of deployed and cloud API gateways—but if you’re already onboard and just want to see what policy management magic you can get up to in a few minutes, feel free to skip down to the gallery.

Does your API gateway make policy management accessible to developers?

Unfortunately, if you’re using any of the most popular deployed or cloud API gateways, the answer ranges from a pained “not particularly” to a flat-out “that’s impossible.”

Most deployed API gateways come with missing signs, potholes, and guardrails that are a little too ambitious: not just keeping you on the road, but forcing you into a lane that keeps getting narrower. As a developer, trying to enact change on these API gateways is expensive, cumbersome, and slow, because they:

  • Force you to create more than one deployment to cover multiple regions, which means you’re actually maintaining two or more separate gateways to have a global presence.
  • Ask you to pay extra for policy plugins you consider essential, like advanced authentication, or request/response modification.
  • Require weeks or months of coordination with operations teams to spin up.
  • Often rely on tools and languages you’re not familiar with, like XML (yikes) and CSharpScript (double yikes), or force you to install entire ecosystems of tools (Make, Docker, Go, plus “special” images) just to write a basic custom policy for your API.

On the other hand, cloud-based API gateways are often easier to deploy and simpler to use than their deployed counterparts, but they’re far more limited in features and lock you into specific environments. You can’t add all the policies you’d like or go multi-cloud without once again begging your operations peers for help that might take them days of work and weeks of waiting for sign-offs from networking and security stakeholders you barely know.

How ngrok lets you quickly add API policy and traffic management

A truly developer-defined API gateway allows you to flexibly deploy and configure in ways that best serve your API consumer. With ngrok’s API gateway, you can:

  1. Deploy the ngrok agent in whichever way best gets your API to your consumers quickly and reliably, including directly on a Linux/macOS/Windows system, within any Kubernetes cluster, or directly within your Go, JavaScript, Python or Rust app with one of our SDKs.
  2. Configure your API gateway at the agent level, as an Edge in the ngrok dashboard, or both.
  3. Run all policy and traffic management workloads on the ngrok Cloud Edge at a Point of Presence (PoP) closest to your API consumer, for a consistent and consistently fast global presence.

With ngrok, you enable ingress at the runtime level (and can even configure it there, too) but also decouple its operation. Your API is then portable across all possible environments, letting you freely test it locally, in a CI/CD environment, or on multiple cloud providers with identical behavior and results for your API consumer.

At the heart of this flexibility is our new Traffic Policy module, which provides a flexible, programmable, and uniform approach to managing API requests and responses across all the ways you use ngrok. This module lets you securely connect your APIs, whether they’re in local testing environments or production deployments, using a single configuration, with support for essential security and availability policies like JWT authentication and rate limiting.

Unlike both traditional deployed API gateways and their newer cloud alternatives, ngrok’s developer-defined option is feature-rich, works everywhere you do, and lets you self-serve your way to production without the operational headaches, red tape, or explosive costs.

Today’s testbed: a simple Go-based API using the ngrok SDK

Sometimes you just don’t want to distribute another binary or manage a separate process to start accepting traffic on your new API—that’s the entire pain point of deployed API gateways, after all. When you embed the ngrok agent directly into your app using one of our SDKs, you can build business logic and ingress at the same time, and in the same repository, using all your favorite tools.

If you’d like to see this lifecycle in action, you can quickly deploy an API using Go and the ngrok SDK from your local workstation. The API is supremely simple—it doesn’t have a database or even fully-functioning CRUD—but it will adequately show the flexibility and programmability of the ngrok API gateway.

If you’d like to just start shaping traffic already, you can skip directly to the examples.

Start by setting up a basic Go project with the packages you’ll need.

mkdir legends-api
cd legends-api
go mod init legends-api
go get golang.ngrok.com/ngrok github.com/gorilla/mux


Create main.go and paste the following Go code into it.

// A simple API for legendary animals.

package main

import (
	"context"
	"encoding/json"
	"log"
	"net/http"
	"os"
	"strconv"

	"github.com/gorilla/mux"
	"golang.ngrok.com/ngrok"
	"golang.ngrok.com/ngrok/config"
)

type Legend struct {
	ID     string `json:"id"`
	Name   string `json:"name"`
	Type   string `json:"type"`
	Origin string `json:"origin"`
}

var legends []Legend

func getLegends(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/json")
	json.NewEncoder(w).Encode(legends)
}

func createLegend(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "application/json")
	var newLegend Legend
	// Reject bodies that don't decode into the Legend struct.
	if err := json.NewDecoder(r.Body).Decode(&newLegend); err != nil {
		http.Error(w, "invalid request body", http.StatusBadRequest)
		return
	}
	newLegend.ID = strconv.Itoa(len(legends) + 1)
	legends = append(legends, newLegend)
	json.NewEncoder(w).Encode(newLegend)
}

func main() {
	if err := run(context.Background()); err != nil {
		log.Fatal(err)
	}
}

func run(ctx context.Context) error {
	ln, err := ngrok.Listen(ctx,
		config.LabeledTunnel(
			config.WithLabel("edge", os.Getenv("NGROK_LABEL")),
		),
		ngrok.WithAuthtokenFromEnv(),
	)
	if err != nil {
		return err
	}

	router := mux.NewRouter()
	router.HandleFunc("/legend", getLegends).Methods("GET")
	router.HandleFunc("/legend", createLegend).Methods("POST")

	log.Println("ngrok API gateway established:")
	log.Println("Tunnel:", ln.ID())
	log.Println("Edge(s):")
	for key, value := range ln.Labels() {
		log.Println(key, value)
	}

	return http.Serve(ln, router)
}


Create an ngrok domain, which we’ll refer to as {YOUR_NGROK_DOMAIN} from here on out.
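
You can create the domain from the ngrok dashboard, or, if you'd rather script it, reserve one through the ngrok API. Here's a minimal sketch that assumes you've already created an ngrok API key and exported it as NGROK_API_KEY; the domain value is a placeholder to replace with your own:

# Assumes NGROK_API_KEY holds an ngrok API key; "legends-api.ngrok.app" is a placeholder domain
curl -X POST https://api.ngrok.com/reserved_domains \
  --header "Authorization: Bearer $NGROK_API_KEY" \
  --header "Content-Type: application/json" \
  --header "Ngrok-Version: 2" \
  --data '{"domain": "legends-api.ngrok.app"}'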

Next, you want to manage your API gateway with an Edge. Head over to Edges -> New Edge -> Attach a domain I already have, and choose the domain you just created. You can now configure your ngrok agent to attach a new tunnel to that Edge. Look just under the name of your Edge to see a label string that begins with edghts_ and copy it.

Paste your Edge label and your ngrok authtoken into the command below:

NGROK_AUTHTOKEN=<YOUR-NGROK-AUTHTOKEN> NGROK_LABEL=<YOUR-EDGE-LABEL> go run main.go


If you refresh your ngrok dashboard, you’ll see that you have a tunnel running.

Now you can make a POST request to your new API:

curl --header "Content-Type: application/json" \
  --request POST \
  --data '{"name": "Zhenniao","type": "Bird","origin": "China"}' \
  https://{YOUR_NGROK_DOMAIN}/legend


The response indicates you added a new legendary creature successfully, and you could continue expanding your “database” as desired:

{"id":"1","name":"Zhenniao","type":"Bird","origin":"China"}

Drop-in API policy management examples with ngrok

As mentioned earlier, you can configure your ngrok API gateway in two ways:

  1. At your ngrok Edge using a web-based editor.
    You gain a few advantages when you establish API policy management at the Edge level. First, you can apply any of the YAML-based drop-in policies shown below regardless of how you’re using ngrok, which means you don’t need to learn and apply multiple syntaxes and patterns. Second, applying policy at the ngrok Edge won’t interrupt the lifecycle of your upstream server. Finally, you can attach multiple ngrok agents to a single Edge—for example, if you’re deploying your API from multiple regions—to manage and apply policies across all of them consistently and instantly.
  2. Directly with the ngrok agent as an ngrok endpoint.
    You can configure the agent itself—such as via the Go SDK, agent CLI, and beyond—to store your API gateway configurations as close to your business logic as possible. This lets you more tightly version-control your policies and makes your deployments declarative and repeatable.

No matter how you decide to apply your API policies, just remember they are evaluated at runtime and in sequential order, so place your highest-priority policies at the top. Only policies without expressions, or those with expressions that return true, are executed.

The drop-in API policy management examples below use the first option: on the Edge and using YAML. You can edit an existing Edge by opening the Traffic Policy module. Click Edit Traffic Policy and paste in any drop-in policy below or mix-and-match actions based on what you need from your API gateway or what provides the best experience for your consumers. When you’re done, click Save at the top-right of the ngrok dashboard to apply your new API traffic policy instantly.

Template #1: Add JWT authentication and key-based rate limiting

This drop-in policy is the de facto standard for API gateways. It denies access to your API for consumers who haven't properly authenticated their machine-to-machine requests with JSON Web Tokens (JWTs) and restricts their usage to reasonable limits. This prevents an accidental distributed denial-of-service (DDoS) attack on your upstream service and helps control your costs.

For this policy to work, you must have defined your API with an identity provider like Auth0, which issues JWTs on your behalf for ngrok to validate with every subsequent request.

on_http_request:
  - expressions: []
    name: Add JWT authentication and rate limiting
    actions:
      - type: rate-limit
        config:
          name: Only allow 30 requests per minute
          algorithm: sliding_window
          capacity: 30
          rate: 60s
          bucket_key:
            - req.Headers['x-api-key']
      - type: jwt-validation
        config:
          issuer:
            allow_list:
              - value: https://<YOUR-AUTH-PROVIDER>
          audience:
            allow_list:
              - value: "{YOUR_NGROK_DOMAIN}"
          http:
            tokens:
              - type: jwt
                method: header
                name: Authorization
                prefix: "Bearer "
          jws:
            allowed_algorithms:
              - RS256
            keys:
              sources:
                additional_jkus:
                  - https://<YOUR-AUTH-PROVIDER>/.well-known/jwks.json
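
To exercise this policy, a request needs both the API key used as the rate-limit bucket key and a valid JWT from your identity provider. Here's a hedged example, assuming you've exported a token issued by your provider as ACCESS_TOKEN and chosen an arbitrary x-api-key value for this consumer:

# ACCESS_TOKEN and the x-api-key value are placeholders for your own credentials
curl --header "Authorization: Bearer $ACCESS_TOKEN" \
  --header "x-api-key: consumer-123" \
  https://{YOUR_NGROK_DOMAIN}/legend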

Template #2: Rate limit API consumers based on authentication status

If you have a public API, you may want to let consumers try it out, albeit with strong restrictions, but also allow those who have signed up for your service and received their authentication token to access it more freely.

In the example below, ngrok applies two tiers of rate limiting: 10 requests/minute for unauthorized users and 100 requests/minute for users who present a JWT in the Authorization request header.

on_http_request:
  - expressions:
      - "!('Authorization' in req.Headers)"
    name: Unauthorized rate limiting tier
    actions:
      - type: rate-limit
        config:
          name: Allow 10 requests per minute
          algorithm: sliding_window
          capacity: 10
          rate: 60s
          bucket_key:
            - conn.ClientIP
  - expressions:
      - ('Authorization' in req.Headers)
    name: Authorized rate limiting tier
    actions:
      - type: rate-limit
        config:
          name: Allow 100 requests per minute
          algorithm: sliding_window
          capacity: 100
          rate: 60s
          bucket_key:
            - conn.ClientIP
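
You can verify the two tiers by sending the same request with and without an Authorization header; the first request below should start returning 429 responses much sooner than the second. The bearer token value is a placeholder, since this policy only checks for the header's presence:

# Unauthorized tier: no Authorization header
curl -i https://{YOUR_NGROK_DOMAIN}/legend

# Authorized tier: any Authorization header satisfies the expression above (ACCESS_TOKEN is a placeholder)
curl -i --header "Authorization: Bearer $ACCESS_TOKEN" https://{YOUR_NGROK_DOMAIN}/legend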

Template #3: Rate limit API consumers based on pricing tiers

This policy enforces four tiers of rate limiting—free, bronze, silver, and gold—based on the headers present in API requests, or lack thereof.

You would then need to instruct your API consumers to use the appropriate header based on their pricing tier, ideally through your developer documentation.

on_http_request:
  - expressions:
      - "!('Tier' in req.Headers)"
    name: Free rate limiting tier
    actions:
      - type: rate-limit
        config:
          name: Allow 10 requests per minute
          algorithm: sliding_window
          capacity: 10
          rate: 60s
          bucket_key:
            - conn.ClientIP
  - expressions:
      - getReqHeader('tier').exists(v, v.matches('(?i)bronze'))
    name: Bronze rate limiting tier
    actions:
      - type: rate-limit
        config:
          name: Allow 100 requests per minute
          algorithm: sliding_window
          capacity: 100
          rate: 60s
          bucket_key:
            - conn.ClientIP
  - expressions:
      - getReqHeader('tier').exists(v, v.matches('(?i)silver'))
    name: Silver rate limiting tier
    actions:
      - type: rate-limit
        config:
          name: Allow 1000 requests per minute
          algorithm: sliding_window
          capacity: 1000
          rate: 60s
          bucket_key:
            - conn.ClientIP
  - expressions:
      - getReqHeader('tier').exists(v, v.matches('(?i)gold'))
    name: Gold rate limiting tier
    actions:
      - type: rate-limit
        config:
          name: Allow 10000 requests per minute
          algorithm: sliding_window
          capacity: 10000
          rate: 60s
          bucket_key:
            - conn.ClientIP


Looking for a quick way to test your new drop-in rate limiting policies? This loop prints out the response status code from curl, showing you exactly when 200 status codes turn into 429 Too Many Requests responses.

for i in `seq 1 20`; do \
  curl -s -o /dev/null \
    -w "\n%{http_code}" \
    -X GET https://{YOUR_NGROK_DOMAIN}/legend ; \
  done
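
To see the tiered limits from Template #3 in action, you can add the Tier header to the same loop. For example, this variation for a hypothetical consumer on the bronze plan shouldn't return 429 until it exceeds 100 requests in a minute:

# "Tier: bronze" matches the bronze expression above; swap in silver or gold to test other tiers
for i in `seq 1 120`; do \
  curl -s -o /dev/null \
    -w "\n%{http_code}" \
    -H "Tier: bronze" \
    -X GET https://{YOUR_NGROK_DOMAIN}/legend ; \
  done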

Template #4: Block traffic from specific countries

Sometimes, you must refuse traffic from specific countries due to internal policy or sanctions applied by the country from which you operate. With the conn.Geo.CountryCode connection variable, ngrok's API gateway lets you reject these requests with a custom response, using a status code and content that deliver as much context as you want or are required to provide.

Replace {COUNTRY_01} and {COUNTRY_02}, or add more countries, using standard two-letter ISO country codes.

on_http_request:
  - expressions:
      - conn.Geo.CountryCode in ['{COUNTRY_01}', '{COUNTRY_02}']
    name: Block traffic from unwanted countries
    actions:
      - type: custom-response
        config:
          status_code: 401
          content: 'Unauthorized request due to country of origin'

Template #5: Maintain and deprecate API versions

As you continue improving your API, whether to add features or fix security flaws, you’ll eventually want to migrate consumers to newer versions. If your developer documentation instructs consumers to use an X-Api-Version header with their requests, you can quickly increment the supported version and deny requests to others.

This example also demonstrates that your custom responses can be formatted as JSON.

on_http_request:
  - expressions:
      - "'2' in req.Headers['X-Api-Version']"
    name: Deprecate API v2
    actions:
      - type: custom-response
        config:
          status_code: 400
          content: >
            {
              "error": {
                "message": "Version 2 of the API is no longer supported. Use Version 3 instead."
              }
            }
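
You can check the deprecation response with a request that explicitly sends the old version header; anything still pinned to version 2 should now receive the 400 status and JSON error body defined above:

# Requests pinned to the deprecated version get the custom JSON error
curl -i --header "X-Api-Version: 2" https://{YOUR_NGROK_DOMAIN}/legend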

Template #6: Manipulate headers on inbound requests

When you manipulate headers on requests, you can provide your upstream service with more context and detail to perform custom business logic. If your API returns prices on goods for sale, for example, your upstream service could localize prices using the API consumer’s country code.

Your headers can use arbitrary strings, like the is-ngrok header in the example below, or any request variable.

on_http_request:
  - expressions: []
    name: Add headers to requests
    actions:
      - type: add-headers
        config:
          headers:
            is-ngrok: "1"
            country: ${.ngrok.geo.country_code}

Template #7: Add compression to your responses

If your upstream service can't compress responses or you would like ngrok to do the work, you can compress all responses using the gzip, deflate, br, or compress algorithms.

on_http_response:
  - expressions: []
    name: Add compression
    actions:
      - type: compress-response
        config:
          algorithms:
            - gzip
            - br
            - deflate
            - compress
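
To confirm compression is applied, ask for a compressed encoding and inspect the response headers. Assuming your client advertises gzip support, you should see a Content-Encoding header in the output:

# -D - dumps response headers to stdout; look for Content-Encoding: gzip
curl -s -o /dev/null -D - \
  --header "Accept-Encoding: gzip" \
  https://{YOUR_NGROK_DOMAIN}/legend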

Template #8: Enforce the TLS version of requests

ngrok’s API gateway lets you quickly add checks to requests to ensure they meet your internal security requirements and send an informative error message if not. This example rejects any connection negotiated with a TLS version older than 1.3.

on_http_request:
  - expressions:
      - req.ClientTLS.Version < '1.3'
    name: Reject requests using old TLS versions
    actions:
      - type: custom-response
        config:
          status_code: 401
          content: "Unauthorized: bad TLS version"

Template #9: Log unsuccessful events to your observability platform

This API policy logs every unsuccessful request to ngrok's eventing system by checking for responses with status codes less than 200 or greater than or equal to 300, letting you observe the effectiveness of any API traffic policy in real time.

on_http_response:
  - expressions:
      - res.StatusCode < '200' || res.StatusCode >= '300'
    name: Log unsuccessful requests
    actions:
      - type: log
        config:
          metadata:
            message: Unsuccessful request
            edge_id: "{YOUR_NGROK_DOMAIN}"
            success: false

Template #10: Limit request (POST/PUT) sizes

If your API accepts new documents or updates to existing ones via user input, you could be at risk of excessively large requests—either accidental or malicious in origin—that create performance bottlenecks in your upstream server or excessive costs due to higher resource usage.

on_http_request:
  - expressions:
      - req.Method == 'POST' || req.Method == 'PUT'
      - req.ContentLength >= 1000
    name: Block POST/PUT requests of excessive length
    actions:
      - type: custom-response
        config:
          status_code: 400
          content: 'Error: content length'
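
To try the policy out, send a POST with a body comfortably over the 1,000-byte threshold. This sketch pads the name field with 2,000 characters using a shell substitution that should work on most Unix-like shells:

# The padded "name" field pushes Content-Length past the 1000-byte limit in the expression above
curl -i --request POST \
  --header "Content-Type: application/json" \
  --data "{\"name\": \"$(head -c 2000 < /dev/zero | tr '\0' 'x')\", \"type\": \"Bird\", \"origin\": \"China\"}" \
  https://{YOUR_NGROK_DOMAIN}/legend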

What’s next?

Get started with the ngrok API gateway by signing up for ngrok and checking out the Traffic Policy engine on your first Edge. Once your ngrok agent is running, you can use these drop-in API policy management examples and start shaping the security and availability of your endpoints in a few minutes.

Don’t be afraid to experiment with API policies! Feel free to mix and match the examples provided, add in additional actions we haven’t covered, and even try your hand at custom logic using the Common Expression Language (CEL) expressions at your disposal. When you apply API policies directly on the ngrok dashboard, we’ll validate your syntax and suggest improvements to ensure your upstream service is always accessible.

We’re also building a Rule Gallery in our documentation for common-to-unconventional use cases for API policy management. If you extend one of the drop-in templates or create your own, we’d love to see a pull request in the ngrok-docs repository or a message on the ngrok community repo about what you've built.
