Building An API Gateway And Service Mesh In A gRPC Microservice Architecture
What Does An API Gateway Do?
In a microservice architecture, the API gateway is responsible for handling external traffic: authentication, caching, compression, and rate limiting.
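For example, with Kong (the gateway used throughout this post), a rate-limiting policy can be declared as a plugin resource. This is a minimal, hypothetical sketch; the resource name and limits are placeholders.
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: rate-limit-per-minute   # placeholder name
plugin: rate-limiting
config:
  minute: 60                    # allow 60 requests per minute
  policy: local                 # keep counters in-memory on each Kong node
The plugin is then attached to an Ingress or Service with the konghq.com/plugins: rate-limit-per-minute annotation.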
Why Use Kong Ingress?
In a Kubernetes environment, an ingress is responsible for routing L7 network traffic, and using an ingress controller as the API gateway is one common solution in a microservice architecture.
Unlike the Kubernetes community's ingress-nginx, which delegates authentication to an external service, the Kong Ingress Controller (KIC) implements several token-based authentication plugins (JWT, HMAC, …) as embedded Lua scripts, without the cost of an additional network request. This makes Kong Ingress Controller one of the API gateway solutions in the Kubernetes ingress ecosystem.
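A minimal sketch of what this looks like with KIC, assuming a hypothetical Ingress named grpc-api (a KongConsumer holding the JWT credential is also required, but omitted here):
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: jwt-auth                 # placeholder name
plugin: jwt                      # token validation runs as embedded Lua inside Kong
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: grpc-api                 # placeholder Ingress name
  annotations:
    konghq.com/plugins: jwt-auth # enable the plugin on this route
# spec.rules omitted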
What Does A Service Mesh Do?
In a Kubernetes environment, iptables rules (configured by kube-proxy) are responsible for routing service-to-service communication. However, iptables performs L4 connection-level load balancing, which distributes requests unevenly for protocols that multiplex requests over a single long-lived TCP connection, such as gRPC.
A common pattern for solving gRPC load balancing in Kubernetes is to run a lightweight L7 proxy as a sidecar or DaemonSet. As service-to-service traffic grows into a mesh, a service mesh is one way to gain observability and management of that traffic without tears.
Why Use Istio?
Istio is one of the service mesh solutions that makes service-to-service communication observable. Istio uses the sidecar pattern, in which an Envoy proxy intercepts traffic through iptables rules. The sidecar proxy comes with higher resource cost and increased latency because of the additional hop. You can choose another solution such as Traefik Mesh or Kong Mesh.
Apply Kong To External & Istio To Internal Traffic
Enable Sidecar Proxy
apiVersion: v1
kind: Namespace
metadata:
  ...
  labels:
    istio-injection: enabled
Disable Sidecar Proxy Inbound Traffic In Kong Ingress
Configure KIC to handle external incoming traffic: the Istio sidecar must not intercept inbound traffic to the Kong proxy.
annotations:
  ...
  traffic.sidecar.istio.io/includeInboundPorts: ""
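In practice this annotation goes on the pod template of the Kong proxy Deployment, so Istio's sidecar leaves inbound traffic to the Kong proxy container untouched. A hypothetical excerpt (the Deployment name and image are placeholders):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kong-proxy                  # placeholder name
spec:
  template:
    metadata:
      annotations:
        # empty list = the Istio sidecar intercepts no inbound ports
        traffic.sidecar.istio.io/includeInboundPorts: ""
    spec:
      containers:
      - name: proxy
        image: kong:2.0             # placeholder image/tag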
Apply gRPC Protocol To Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  ...
  annotations:
    ...
    konghq.com/protocols: "grpc,grpcs"
    konghq.com/preserve-host: "false"
...
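A fuller, hypothetical example (host, backend Service name, and port are placeholders) routing all paths of grpc.example.com to a gRPC backend:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: grpc-ingress                    # placeholder name
  annotations:
    kubernetes.io/ingress.class: kong
    konghq.com/protocols: "grpc,grpcs"
    konghq.com/preserve-host: "false"
spec:
  rules:
  - host: grpc.example.com              # placeholder host
    http:
      paths:
      - path: /
        backend:
          serviceName: grpc-service     # placeholder backend Service
          servicePort: 50051            # gRPC port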
Apply gRPC Protocol and Service Upstream To Service
For a plain Service, KIC creates connections directly to the list of pod IPs, skipping the Kubernetes Service (the iptables rules configured by kube-proxy). However, the Istio sidecar proxy intercepts outbound traffic at the iptables level, so we need to configure KIC to send traffic to the Kubernetes Service instead.
apiVersion: v1
kind: Service
metadata:
  ...
  annotations:
    ingress.kubernetes.io/service-upstream: "true"
    konghq.com/protocol: "grpc"
...
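A fuller, hypothetical example (names and port are placeholders). With service-upstream enabled, Kong proxies to the Service's ClusterIP, and the Istio sidecar then balances gRPC requests at L7:
apiVersion: v1
kind: Service
metadata:
  name: grpc-service                                 # placeholder name
  annotations:
    ingress.kubernetes.io/service-upstream: "true"   # send traffic to the Service, not pod IPs
    konghq.com/protocol: "grpc"                      # speak gRPC to the upstream
spec:
  selector:
    app: grpc-server                                 # placeholder pod label
  ports:
  - name: grpc
    port: 50051
    targetPort: 50051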
Should AWS NLB Or ALB Be Provisioned In Front Of Kong Ingress?
AWS NLB is an L4 load balancer and is more performant than AWS ALB. However, because it does L4 connection-level load balancing, it distributes gRPC load unevenly across Kong replicas. AWS ALB is an L7 load balancer, which balances gRPC requests without tears.
According to Kong's resource sizing guidelines, one worker process runs per available CPU core, and Kong recommends allowing around 500MB of memory per worker process. The following tips apply depending on whether an AWS NLB or ALB is provisioned.
If AWS NLB Is Provisioned
Vertical scaling can be used for Kong ingress: create 1 Kong ingress replica and configure it with multiple worker processes (multiple CPU cores), as sketched below.
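A hypothetical sketch of the vertical option (Deployment name, image, and sizing are assumptions), using Kong's KONG_NGINX_WORKER_PROCESSES setting and roughly 500MB of memory per worker:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kong-proxy                  # placeholder name
spec:
  replicas: 1
  template:
    spec:
      containers:
      - name: proxy
        image: kong:2.0             # placeholder image/tag
        env:
        - name: KONG_NGINX_WORKER_PROCESSES
          value: "4"                # one worker per CPU core
        resources:
          requests:
            cpu: "4"
            memory: 2Gi             # 4 workers x ~500MB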
Horizontal scaling can also be used for Kong ingress if you can tolerate L4 connection-level load balancing across the replicas.
If AWS ALB Is Provisioned
AWS ALB distributes network traffic evenly across multiple Kong ingress replicas. You can configure each Kong ingress replica with 1 worker process on 1 CPU core and at least 500MB of memory per worker process, as sketched below.
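A hypothetical sketch of the horizontal option (again, names and sizing are assumptions): several small replicas, each running a single worker process:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kong-proxy                  # placeholder name
spec:
  replicas: 3                       # ALB spreads gRPC requests across replicas
  template:
    spec:
      containers:
      - name: proxy
        image: kong:2.0             # placeholder image/tag
        env:
        - name: KONG_NGINX_WORKER_PROCESSES
          value: "1"                # single worker per replica
        resources:
          requests:
            cpu: "1"
            memory: 512Mi           # at least 500MB per worker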
Summary
In a Kubernetes environment, we use Kong ingress as the API gateway to handle external traffic and the Istio service mesh to take over internal traffic in a gRPC microservice architecture.
References
gRPC Load Balancing on Kubernetes without Tears
Kong Ingress Controller and Service Mesh: Setting up Ingress to Istio on Kubernetes