Traffic Flow in AWS Infrastructure
The traffic flow within Bragabot's AWS infrastructure is carefully designed to ensure secure, efficient, and reliable delivery of requests from clients to the backend services and back. This flow handles everything from initial client requests coming in from the internet to routing through various layers of the system, ensuring that each request is processed by the appropriate service. By leveraging AWS components such as Elastic Load Balancers, Virtual Private Cloud (VPC), and EC2 instances, Bragabot ensures high availability, security, and scalability.
This section provides a breakdown of the journey a request takes through the infrastructure, highlighting key components and their roles in the process:
Client Request:
Initiation: The process begins when a client (Bragabot Mini App user) initiates a request. This could be anything from a user sending a command to the Telegram bot to a group admin accessing the Bragabot dashboard.
Entry Point: The client request first hits the Elastic Load Balancer (ELB), which is exposed to the public internet. The ELB is located within the Public Subnet of the AWS Virtual Private Cloud (VPC).
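To make the entry point concrete, the sketch below shows what a client call into this infrastructure might look like. The host, route, and payload are hypothetical placeholders rather than Bragabot's actual API.

```python
import requests

# Hypothetical endpoint; Bragabot's real host and routes are not shown here.
API_URL = "https://api.example-bragabot.com/v1/raids"

# Illustrative payload for a group admin creating a raid from the dashboard.
payload = {"group_id": "123456", "tweet_url": "https://x.com/example/status/1"}

# DNS resolves the host to the public-facing ELB, the entry point described above.
response = requests.post(API_URL, json=payload, timeout=10)
response.raise_for_status()
print(response.json())
```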
Elastic Load Balancer (ELB):
Routing: The ELB’s primary role is to distribute incoming traffic across multiple backend instances located within the Private Subnet of the VPC. The ELB ensures that the load is evenly distributed, enhancing the system's availability and fault tolerance.
Health Checks: The ELB continuously monitors the health of the EC2 instances it routes traffic to, ensuring that only healthy instances receive traffic.
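Health checking is built-in ELB behavior rather than application code, but the state it tracks can be inspected programmatically. The boto3 sketch below assumes an Application Load Balancer (ELBv2) and uses a placeholder target group ARN.

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Placeholder ARN; in practice this identifies the backend target group.
health = elbv2.describe_target_health(
    TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:"
                   "targetgroup/bragabot-backend/0123456789abcdef"
)

# Only targets reporting "healthy" continue to receive traffic from the ELB.
for target in health["TargetHealthDescriptions"]:
    print(target["Target"]["Id"], target["TargetHealth"]["State"])
```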
Internet Gateway and VPC:
Internet Gateway: Traffic from the public internet enters the VPC through the AWS Internet Gateway before it reaches the ELB; the gateway is what allows communication between instances within the VPC and the internet. It acts as the bridge between the public internet and Bragabot's internal resources.
Virtual Private Cloud (VPC): The VPC serves as a secure and isolated network environment where Bragabot’s services are hosted. It contains both Public and Private Subnets, ensuring that sensitive components are shielded from direct internet exposure.
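A minimal sketch of this wiring with boto3, using placeholder resource IDs: the gateway is created, attached to the VPC, and a default route sends internet-bound traffic from the public subnet through it.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create an Internet Gateway and attach it to the VPC (IDs are placeholders).
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId="vpc-0123456789abcdef0")

# Default route in the public subnet's route table points at the gateway.
ec2.create_route(
    RouteTableId="rtb-0123456789abcdef0",
    DestinationCidrBlock="0.0.0.0/0",
    GatewayId=igw_id,
)
```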
API Gateway:
Request Management: Once inside the VPC, the request is forwarded to the API Gateway, which serves as the unified entry point for all incoming API requests. The API Gateway handles routing, security checks, rate limiting, and request aggregation.
Authentication: The API Gateway verifies the request's authentication credentials, ensuring that only authorized users can access the backend services.
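As one illustration of the kind of check a gateway performs, the sketch below validates a bearer token with PyJWT. The signing key, algorithm, and header format are assumptions for the example, not Bragabot's actual scheme.

```python
import jwt  # PyJWT

SECRET = "replace-with-gateway-signing-key"  # illustrative only

def authenticate(authorization_header: str) -> dict:
    """Reject the request unless it carries a valid bearer token."""
    if not authorization_header.startswith("Bearer "):
        raise PermissionError("Missing bearer token")
    token = authorization_header.removeprefix("Bearer ")
    try:
        # Verifies signature and expiry before the request reaches backend services.
        return jwt.decode(token, SECRET, algorithms=["HS256"])
    except jwt.InvalidTokenError as exc:
        raise PermissionError("Invalid or expired token") from exc
```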
Kubernetes Cluster (NGINX Ingress Controller):
Traffic Management: The API Gateway directs the traffic to the NGINX Ingress Controller within the Kubernetes cluster, which manages the routing of HTTP and HTTPS traffic. It provides SSL termination, load balancing, and name-based routing, ensuring that the request reaches the correct service.
Pod Routing: The Ingress Controller routes the request to the appropriate Kubernetes Service, which further directs the request to the relevant Pods running the microservices.
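The routing rules the Ingress Controller enforces can be declared through the Kubernetes API. The sketch below uses the official Python client to map a hypothetical host and path to a hypothetical raid-service; all names are illustrative.

```python
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() when running in-cluster

ingress = client.V1Ingress(
    metadata=client.V1ObjectMeta(
        name="bragabot-ingress",
        annotations={"nginx.ingress.kubernetes.io/ssl-redirect": "true"},
    ),
    spec=client.V1IngressSpec(
        ingress_class_name="nginx",
        rules=[client.V1IngressRule(
            host="api.example-bragabot.com",  # placeholder host
            http=client.V1HTTPIngressRuleValue(paths=[client.V1HTTPIngressPath(
                path="/raids",
                path_type="Prefix",
                backend=client.V1IngressBackend(
                    service=client.V1IngressServiceBackend(
                        name="raid-service",  # hypothetical Kubernetes Service
                        port=client.V1ServiceBackendPort(number=80),
                    )
                ),
            )]),
        )],
    ),
)

client.NetworkingV1Api().create_namespaced_ingress(namespace="default", body=ingress)
```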
Microservices Processing:
Service Execution: The request is processed by the relevant microservice, which may handle user management, tweet forwarding, raid creation, or another core function. These microservices are packaged as Docker containers running on EC2 instances within the Private Subnet.
Data Interaction: During processing, the microservice can interact with the MongoDB database to retrieve or store data. This interaction occurs within the secure boundaries of the VPC, ensuring data integrity and security.
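A minimal sketch of such a data interaction with pymongo; the connection string, database, and collection names are placeholders.

```python
from pymongo import MongoClient

# Placeholder URI; the real MongoDB instance lives inside the private subnet.
db = MongoClient("mongodb://mongo.internal:27017")["bragabot"]

def record_raid(group_id: str, tweet_url: str) -> str:
    """Persist a new raid document and return its id."""
    result = db.raids.insert_one({"group_id": group_id, "tweet_url": tweet_url})
    return str(result.inserted_id)
```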
Data Layer Interaction:
MongoDB and Firebase: Depending on the nature of the request, the microservice can access data stored in MongoDB or media files stored in Firebase. MongoDB handles the storage of structured and unstructured data, while Firebase is used for scalable storage of media and other assets.
Caching: Frequently accessed data is served from Redis, an in-memory cache, to minimize latency and improve response times.
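A cache-aside read illustrating this pattern with redis-py and pymongo; the hostnames, key scheme, and TTL are assumptions.

```python
import json
import redis
from pymongo import MongoClient

# Placeholder hosts; both stores live inside the private subnet.
cache = redis.Redis(host="redis.internal", port=6379)
db = MongoClient("mongodb://mongo.internal:27017")["bragabot"]

def get_group_settings(group_id: str) -> dict:
    """Serve from Redis when possible, else fall back to MongoDB and repopulate."""
    key = f"group:{group_id}:settings"  # hypothetical key scheme
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)
    settings = db.group_settings.find_one({"group_id": group_id}, {"_id": 0}) or {}
    cache.setex(key, 300, json.dumps(settings))  # 5-minute TTL is an assumption
    return settings
```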
Asynchronous Task Handling:
Job Queues: For tasks that require background processing (e.g., sending notifications for raids, processing tweets), the microservice enqueues the task in a job queue managed by the bot framework's job-scheduling system. These tasks are processed asynchronously, keeping the main service responsive while work completes in the background.
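Purely as an illustration of the enqueue-and-consume pattern (not Bragabot's actual scheduler), the sketch below uses a Redis list as a simple job queue.

```python
import json
import redis

queue = redis.Redis(host="redis.internal", port=6379)  # placeholder host

def enqueue_raid_notification(group_id: str, raid_id: str) -> None:
    """Push a background task; the caller returns immediately."""
    task = {"type": "raid_notification", "group_id": group_id, "raid_id": raid_id}
    queue.rpush("jobs:notifications", json.dumps(task))

def worker_loop() -> None:
    """A separate worker blocks until a task arrives, then processes it."""
    while True:
        _, raw = queue.blpop("jobs:notifications")
        task = json.loads(raw)
        print("processing", task["type"], "for group", task["group_id"])
```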
Horizontal Pod Autoscaler (HPA):
Scaling: The Horizontal Pod Autoscaler (HPA) monitors the load on the Pods and scales the number of replicas up or down based on CPU usage or custom metrics. This ensures that Bragabot can handle varying loads efficiently.
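The HPA's core rule is proportional: desired replicas = ceil(current replicas * current metric / target metric). The sketch below applies that rule to CPU utilization; the min/max bounds are assumptions.

```python
import math

def desired_replicas(current: int, current_cpu: float, target_cpu: float,
                     min_replicas: int = 2, max_replicas: int = 10) -> int:
    """Kubernetes HPA scaling rule: replicas grow in proportion to metric pressure."""
    desired = math.ceil(current * (current_cpu / target_cpu))
    return max(min_replicas, min(max_replicas, desired))

# Example: 3 pods averaging 90% CPU against a 60% target -> ceil(3 * 1.5) = 5.
print(desired_replicas(3, 90.0, 60.0))
```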
Response:
Completion: After processing the request, the microservice sends the response back through the Kubernetes Service and the NGINX Ingress Controller.
Gateway Return: The response is routed through the API Gateway, which aggregates any necessary data before forwarding it back through the ELB.
Client Delivery: Finally, the ELB sends the response back to the client, completing the cycle. The client receives the requested data or confirmation of the action they initiated.
The traffic flow within Bragabot’s AWS infrastructure is designed for optimal efficiency, security, and scalability. By utilizing AWS services like ELB, VPC, and EC2, along with Kubernetes for container orchestration and Django for backend processing, Bragabot ensures that user requests are handled smoothly and reliably. This architecture allows Bragabot to scale dynamically, maintain high availability, and deliver a consistent user experience across all its services.