3 posts tagged with "cloudflare"


Cloudflare for DevOps: CDN, Serverless Edge & Zero Trust Powerhouse

If you’ve ever deployed a website or managed infrastructure at scale, you’ve probably heard of Cloudflare. Most folks think of it as just a CDN with DDoS protection. But dig a little deeper, and you’ll find it’s evolving into a full-blown edge platform: part DNS provider, part firewall, part serverless compute engine, and even a zero-trust network.

Let’s break down what Cloudflare really offers and how you can get the most out of it.


CDN Alternatives, DNS & DDoS Protection#

[Image: Cloudflare CDN protecting servers from DDoS and latency issues]

Cloudflare started as a reverse proxy and CDN combo. It now caches your static assets in 300+ data centers globally, which drastically reduces latency and protects your origin server. Learn more about Cloudflare CDN

It also has DDoS protection built-in, handling both Layer 3/4 and Layer 7 attacks automatically — all at no extra cost. That’s huge compared to setting this up with AWS Shield or a WAF. Compare with AWS Shield

And let’s not forget DNS. Their public resolver, 1.1.1.1, is among the fastest available. For authoritative DNS hosting, Cloudflare DNS is blazing fast and comes with DNSSEC and other enterprise-grade features, again for free. Explore 1.1.1.1 DNS
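
A quick way to see the resolver in action, assuming you have dig installed:

dig @1.1.1.1 example.com +short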


WAF, Bot Protection & Rate Limiting#

Cloudflare’s Web Application Firewall (WAF) is developer-friendly and integrates nicely with modern CI/CD pipelines. You can write custom firewall rules using their UI or even Terraform. Cloudflare WAF Documentation

Need to throttle abusive IPs or stop credential-stuffing bots? Cloudflare offers precise control. For example:

(ip.src eq 192.0.2.1 and http.request.uri.path contains "/admin")

It’s not just a firewall — it’s programmable security.
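
To give a feel for the "as code" side, here is a rough sketch of the same block rule expressed with the Cloudflare Terraform provider's cloudflare_ruleset resource. The zone ID variable and description are placeholders, and the exact schema varies a bit between provider versions:

resource "cloudflare_ruleset" "custom_firewall" {
  zone_id = var.zone_id   # placeholder: your zone ID
  name    = "Custom firewall rules"
  kind    = "zone"
  phase   = "http_request_firewall_custom"

  rules {
    action      = "block"
    expression  = "(ip.src eq 192.0.2.1 and http.request.uri.path contains \"/admin\")"
    description = "Block a known-bad IP probing /admin"
    enabled     = true
  }
}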


Serverless Edge Compute with Workers & Durable Objects#

[Image: Cloudflare Workers powering serverless edge compute in DevOps]

Here’s where things get spicy. Cloudflare Workers let you run JavaScript or TypeScript functions directly at the edge. No need for centralized cloud regions. That means lower latency and zero cold starts.

Use cases include:

  • Lightweight APIs
  • JWT-based authentication
  • A/B testing and personalization
  • Edge-rendered SSR apps like Next.js

It’s conceptually similar to AWS Lambda, but Workers run in lightweight V8 isolates rather than containers, so they start faster and carry less per-invocation overhead. Plus, with Durable Objects and Workers KV, you can manage globally distributed state from the same platform. Get started with Cloudflare Workers
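
As a minimal sketch, a module-syntax Worker that answers one API route at the edge and passes everything else to the origin looks roughly like this (the /api/hello path is just an illustration):

export default {
  async fetch(request) {
    const url = new URL(request.url);

    // Answer this route entirely at the edge
    if (url.pathname === "/api/hello") {
      return Response.json({ message: "Hello from the edge" });
    }

    // Everything else falls through to the origin
    return fetch(request);
  },
};

Run it locally with wrangler dev and ship it with wrangler deploy.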


Zero Trust Networking Without VPNs#

Cloudflare Zero Trust (formerly Cloudflare for Teams, which bundles Access and Gateway) lets you secure internal apps without a VPN.

You get:

  • SSO via Google Workspace or GitHub
  • Device posture checks
  • Real-time activity logs

With Cloudflare Tunnel (formerly Argo Tunnel), you can expose internal apps securely without public IPs or open inbound ports. It’s perfect for remote teams or CI/CD pipelines.
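
A rough outline of wiring a Tunnel up with the cloudflared CLI, where the tunnel name, hostname, and local port are all placeholders:

cloudflared tunnel login
cloudflared tunnel create internal-apps
cloudflared tunnel route dns internal-apps app.example.com

A small config file (typically ~/.cloudflared/config.yml) then maps hostnames to local services, with a catch-all rule at the end:

tunnel: internal-apps
credentials-file: /root/.cloudflared/<TUNNEL_ID>.json
ingress:
  - hostname: app.example.com
    service: http://localhost:8080
  - service: http_status:404

Start it with cloudflared tunnel run internal-apps and the app is reachable at that hostname, with no inbound ports open.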


S3-Compatible R2 Storage with No Egress Fees#

R2 is Cloudflare’s answer to S3, but without the painful egress fees. It exposes an S3-compatible API, making it ideal for hosting media, static assets, or backups.

Imagine: you upload images to R2, process them with Workers, and boom — serverless image hosting with no Lambda, no VPC headaches.
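
As a sketch of that pattern, a Worker with an R2 bucket binding (the MY_BUCKET binding name would be declared in wrangler.toml and is an assumption here) can accept uploads and serve objects in a handful of lines:

export default {
  async fetch(request, env) {
    const key = new URL(request.url).pathname.slice(1); // e.g. /cat.png -> cat.png

    if (request.method === "PUT") {
      // Store the uploaded body in R2
      await env.MY_BUCKET.put(key, request.body);
      return new Response(`Stored ${key}`, { status: 201 });
    }

    // Serve the object straight from R2
    const object = await env.MY_BUCKET.get(key);
    if (!object) return new Response("Not found", { status: 404 });
    return new Response(object.body, {
      headers: {
        "content-type": object.httpMetadata?.contentType ?? "application/octet-stream",
      },
    });
  },
};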


DevOps Observability with Logpush & GraphQL#

[Image: Engineer analyzing observability metrics and logs with charts and dashboards]

Cloudflare provides rich analytics: traffic stats, threat maps, and origin logs. Need to ship logs to S3 or a SIEM? Use Logpush.

Want custom dashboards? You can query logs with GraphQL.
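
For example, a rough GraphQL Analytics API query for a week of per-day request and bandwidth totals looks something like this (the zone tag and date are placeholders, and the available datasets depend on your plan):

query {
  viewer {
    zones(filter: { zoneTag: "YOUR_ZONE_TAG" }) {
      httpRequests1dGroups(limit: 7, filter: { date_gt: "2024-01-01" }) {
        dimensions { date }
        sum { requests bytes cachedRequests }
      }
    }
  }
}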


GitOps, CI/CD & Infrastructure as Code with Cloudflare#

Cloudflare plays well with modern DevOps. Using their Terraform provider, you can manage WAF rules, DNS, Workers, and more as code.

For CI/CD, use Cloudflare Pages for JAMstack sites or deploy Workers using GitHub Actions:

- name: Deploy Worker
  run: wrangler publish

Simple, clean, and version-controlled.
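
Fleshed out, that step might live in a workflow along the lines of the sketch below, which uses Cloudflare's wrangler-action with an API token stored as a repository secret. The action version and secret name are assumptions, and note that newer Wrangler releases use wrangler deploy in place of wrangler publish:

# .github/workflows/deploy-worker.yml
name: Deploy Worker
on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Deploy Worker
        uses: cloudflare/wrangler-action@v3
        with:
          apiToken: ${{ secrets.CLOUDFLARE_API_TOKEN }}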


Final Thoughts: The Edge OS Is Here#

Whether you’re spinning up a personal site or managing infrastructure for an enterprise, Cloudflare likely has a tool to make your life easier.

From firewalls and serverless compute to object storage and DNS, it’s rapidly becoming an operating system for the internet edge — and a lot of it is free.

If you’re still just using it to hide your origin IP and enable HTTPS, it’s time to go deeper.

From one-click deployments to full-scale orchestration, Nife offers powerful, globally accessible solutions tailored for modern application lifecycle management — explore all our solutions and accelerate your cloud journey.

Unlock the full potential of your infrastructure with OIKOS by Nife — explore features designed to simplify orchestration, boost performance, and drive automation.

Deploy NGINX Ingress Controller on AWS EKS with HTTP and TCP Routing via Single LoadBalancer

When managing Kubernetes workloads on AWS EKS, using a separate LoadBalancer for each service can quickly become expensive and inefficient. A cleaner, more scalable, and more cost-effective approach is to use an Ingress Controller like NGINX to expose multiple services through a single LoadBalancer. This post walks through how I set up Ingress in my EKS cluster using Helm, configured host-based routing, and mapped domains through Cloudflare.


Prerequisites#

  • AWS EKS Cluster set up
  • kubectl, helm, and aws-cli configured
  • Services already running in EKS
  • Cloudflare account to manage DNS

Get started with EKS in the AWS EKS User Guide.


Step 1: Install NGINX Ingress Controller on EKS using Helm#

[Image: Diagram showing Helm deployment of NGINX Ingress Controller on EKS]

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install nginx-ingress ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.service.type=LoadBalancer

This will install the NGINX Ingress Controller and expose it through a LoadBalancer service. You can get the external ELB DNS using:

kubectl get svc -n ingress-nginx

Note the EXTERNAL-IP of the ingress controller service; this is your public ELB DNS name.

Learn more about NGINX Ingress at the official Kubernetes documentation.
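
For non-HTTP workloads, the same controller can also forward raw TCP traffic through the same LoadBalancer: the ingress-nginx chart accepts a tcp values map that forwards an extra port on the controller's Service to a backend service. A minimal sketch, with the namespace, service, and ports as placeholders:

helm upgrade nginx-ingress ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --reuse-values \
  --set tcp.5432=my-namespace/postgres-service:5432

This exposes port 5432 on the existing LoadBalancer and proxies it to the named service. Host-based routing does not apply to plain TCP, so each TCP service needs its own port.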


Step 2: Create Your Ingress YAML for Host-Based Routing#

[Image: Team collaborating on Kubernetes Ingress YAML configuration for host routing]

Below is an example Ingress manifest to expose a service using a custom domain:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: pubggpiro9ypjn-ing
  namespace: pubggpiro9ypjn
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
    - host: metube-app-622604.clb2.nifetency.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: service-f35714cd-4cb5-4f7e-b9db-4daa699640b3
                port:
                  number: 8081

Apply the file using:

kubectl apply -f your-ingress.yaml
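
Before touching DNS, it's worth checking that the Ingress was admitted and picked up the controller's LoadBalancer address:

kubectl get ingress -n pubggpiro9ypjn
kubectl describe ingress pubggpiro9ypjn-ing -n pubggpiro9ypjn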

Step 3: Configure DNS for NGINX Ingress with Cloudflare#

[Image: DevOps engineer configuring Cloudflare DNS records for NGINX Ingress routing]

Go to your Cloudflare dashboard and create a CNAME record:

  • Name: metube-app-622604 (or any subdomain you want)
  • Target: your NGINX LoadBalancer DNS (e.g., a1b2c3d4e5f6g7.elb.amazonaws.com)
  • Proxy status: Proxied ✅

Wait for DNS propagation (~1–5 minutes), and then your service will be available via the custom domain you configured.

Understand DNS management in Cloudflare with the Cloudflare DNS docs.


Verify NGINX Ingress Routing and Domain Configuration#

Try accessing the domain in your browser:

http://metube-app-622604.clb2.nifetency.com

You should see the application running from port 8081 of the backend service.
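
If DNS hasn't propagated yet, you can also test the routing path directly against the ELB by spoofing the Host header, using the example ELB hostname from Step 3:

curl -H "Host: metube-app-622604.clb2.nifetency.com" http://a1b2c3d4e5f6g7.elb.amazonaws.com/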


Reference Document#

For more detailed steps and examples, check out this shared doc:
🔗 Ingress and DNS Setup Guide


Benefits of Using NGINX Ingress with Single LoadBalancer#

  • Cost-effective: One LoadBalancer for all services.
  • Scalable: Add new routes/domains by just updating the Ingress.
  • Secure: Easily integrate SSL with Cert-Manager or Cloudflare.
  • Customizable: Full control over routing, headers, and rewrites.

Conclusion: Efficient Multi-Service Exposure in EKS with NGINX#

Exposing multiple services in EKS using a single LoadBalancer with NGINX Ingress can streamline your infrastructure and reduce costs. Just remember:

  • Use Helm to install and manage the NGINX Ingress Controller
  • Configure host-based routing to serve multiple domains through one point
  • Use Cloudflare DNS to map custom domains to your LoadBalancer
  • Regularly test and validate access for each new service

With just a few commands and configurations, you can build a scalable and efficient ingress setup—ready for production.

Learn how to add and manage EKS clusters with Nife’s AWS EKS integration guide.

Learn how to add standalone Kubernetes clusters with Nife’s standalone cluster setup guide.

