AWS Setup and Permissions Guide for Orchestr8
Prerequisites
CLI Tool Requirements
Before deploying to AWS, ensure you have the required command-line tools. See the Prerequisites Guide for complete installation instructions for all tools.
Required for AWS:
- Core tools: kubectl, helm, git (required for all providers)
- AWS-specific: aws CLI for authentication and management
Quick Verification:
# Verify all required tools are installed
o8 doctor --verbose
# Show installation instructions for missing tools
o8 doctor --fix
AWS Account Requirements
- Active AWS account with billing enabled
- Administrative access or specific IAM permissions (see below)
Required IAM Permissions
Your AWS account/role needs the following permissions for Orchestr8 deployment; a sample policy document is sketched after the lists below:
Core Services:
- EC2: Full access for cluster nodes and networking
- EKS: Full access for Kubernetes cluster management
- IAM: Role and policy management for service accounts
- VPC: Networking configuration and security groups
Additional Services:
- Route53: DNS management for ingress
- ACM: SSL certificate management
- Secrets Manager: External secrets integration
- CloudWatch: Monitoring and logging
- ECR: Container image registry
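A custom policy document covering these services might look like the following sketch. The broad wildcard actions are illustrative only and should be narrowed for production; the file name matches the orchestr8-policy.json referenced later in Troubleshooting.
cat > orchestr8-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:*", "eks:*", "iam:*", "route53:*", "acm:*",
        "secretsmanager:*", "cloudwatch:*", "logs:*", "ecr:*"
      ],
      "Resource": "*"
    }
  ]
}
EOF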
Authentication Setup
Step 1: Install AWS CLI
# Windows (using Chocolatey)
choco install awscli
# Or download from: https://aws.amazon.com/cli/
Step 2: Configure AWS CLI
# Configure AWS credentials
aws configure
# Verify authentication
aws sts get-caller-identity
Step 3: Set up IAM Role (Recommended for Production)
# Create service role for Orchestr8
aws iam create-role --role-name Orchestr8DeploymentRole \
--assume-role-policy-document file://trust-policy.json
# Attach required policies
aws iam attach-role-policy --role-name Orchestr8DeploymentRole \
--policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
aws iam attach-role-policy --role-name Orchestr8DeploymentRole \
--policy-arn arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
aws iam attach-role-policy --role-name Orchestr8DeploymentRole \
--policy-arn arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
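The trust-policy.json referenced above is not shown; a minimal sketch, assuming the role is assumed by the EKS service (swap the principal, for example to ec2.amazonaws.com, if the role is intended for worker nodes):
cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "eks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF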
Provisioning Infrastructure with O8
Once authentication is set up:
# Install O8
uv tool install orchestr8-platform
# Provision AWS infrastructure and deploy platform
o8 setup \
--provider aws \
--provision-infrastructure \
--domain your-domain.com \
--github-org your-github-org \
--cluster cluster-name \
--region us-east-1 \
--environment dev
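Once setup completes, a quick check with the same cluster name and region confirms the control plane is active:
# Confirm the EKS cluster reports ACTIVE
aws eks describe-cluster --name cluster-name --region us-east-1 \
  --query cluster.status --output text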
Advanced Configuration Options
# Production deployment with custom settings
o8 setup \
--provider aws \
--provision-infrastructure \
--cluster production-cluster \
--region us-east-1 \
--node-instance-type m5.large \
--min-nodes 3 \
--max-nodes 10 \
--enable-private-cluster \
--enable-irsa \
--environment production
Infrastructure Components Created
When O8 provisions AWS infrastructure, it creates:
1. EKS Cluster
- Managed Kubernetes control plane
- Auto-scaling node groups (2-10 nodes)
- Private or public cluster endpoints
- IRSA (IAM Roles for Service Accounts) enabled
2. Networking
- VPC with public and private subnets
- Internet Gateway and NAT Gateways
- Security groups for cluster communication
- Application Load Balancer for ingress
3. IAM Resources
- EKS cluster service role
- Node group instance profile
- Service account roles for ArgoCD, External Secrets
- OIDC identity provider for IRSA
4. Storage & Registry
- EBS CSI driver for persistent volumes
- ECR repositories for container images
- Optional EFS for shared storage
5. Platform Services
- ArgoCD deployed via Helm
- AWS Load Balancer Controller
- External Secrets Operator with Secrets Manager
- Cert-manager with Route53 DNS validation
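To spot-check that these platform services came up, the following commands can be used; the namespace names are typical defaults and may differ in your installation:
kubectl get pods -n argocd
kubectl get pods -n external-secrets
kubectl get pods -n cert-manager
kubectl get deployment aws-load-balancer-controller -n kube-system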
Cost Optimization
Development Environments
- Spot instances: Up to 70% cost savings for nodes
- t3.medium nodes: Cost-effective for development
- Single AZ deployment: Reduced data transfer costs
- Auto-scaling: Scale to zero during off-hours
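If you manage node groups outside of O8, a spot-backed development group can be created directly with the AWS CLI; the cluster name, subnet IDs, and role ARN below are placeholders:
aws eks create-nodegroup \
  --cluster-name cluster-name \
  --nodegroup-name dev-spot \
  --capacity-type SPOT \
  --instance-types t3.medium \
  --scaling-config minSize=0,maxSize=3,desiredSize=1 \
  --subnets subnet-aaaa1111 subnet-bbbb2222 \
  --node-role arn:aws:iam::123456789012:role/Orchestr8NodeRole \
  --region us-east-1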
Production Considerations
- Reserved instances: Long-term cost savings
- Multiple AZs: High availability with cost trade-offs
- Right-sizing: Use AWS Cost Explorer recommendations
- Resource tagging: Track costs by environment/team
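As a concrete example, cluster resources can be tagged for cost tracking after creation (the instance ID and tag values are placeholders); remember to activate the tags as cost allocation tags in the Billing console:
aws ec2 create-tags \
  --resources i-0123456789abcdef0 \
  --tags Key=Environment,Value=production Key=Team,Value=platform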
Troubleshooting
Error: "UnauthorizedOperation"
This indicates insufficient AWS permissions. Check which policies are attached to your user or role and, if needed, attach a dedicated Orchestr8 policy:
# Check current permissions
aws iam list-attached-user-policies --user-name $(aws iam get-user --query User.UserName --output text)
# Create custom policy for Orchestr8
aws iam create-policy --policy-name Orchestr8FullAccess \
--policy-document file://orchestr8-policy.json
Error: "VPC limit exceeded"
AWS account limits reached:
# Check current VPC usage in the region (the default quota is 5 VPCs per region)
aws ec2 describe-vpcs --query 'length(Vpcs)' --output text
# Request limit increase through AWS Support
Error: "EKS cluster creation failed"
Common issues:
- Subnet configuration: Ensure proper CIDR blocks
- IAM roles: Verify EKS service role permissions
- Region capacity: Try different availability zones
# Check cluster status and any failure messages
aws eks describe-cluster --name cluster-name --region us-east-1
Error: "LoadBalancer creation timeout"
ALB controller issues:
# Check ALB controller logs
kubectl logs -n kube-system deployment/aws-load-balancer-controller
# Verify IAM role for service account
kubectl describe serviceaccount aws-load-balancer-controller -n kube-system
Security Best Practices
Network Security
- Private clusters: Keep API server endpoints private
- Security groups: Minimal required access only
- VPC endpoints: Avoid internet traffic for AWS services
Identity & Access
- IRSA: Use IAM roles instead of access keys
- Least privilege: Minimal required permissions
- Service accounts: Dedicated roles per service
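With IRSA, a Kubernetes service account is tied to an IAM role through an annotation instead of long-lived access keys; a minimal sketch (the role ARN, service account name, and namespace are placeholders):
kubectl annotate serviceaccount my-app -n my-namespace \
  eks.amazonaws.com/role-arn=arn:aws:iam::123456789012:role/MyAppRole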
Data Protection
- EBS encryption: Encrypt persistent volumes
- Secrets Manager: Never store secrets in code
- KMS keys: Customer-managed encryption keys
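For example, EBS encryption can be enforced per region for all new volumes, optionally with a customer-managed KMS key (the key alias is a placeholder):
# Turn on default encryption for new EBS volumes in this region
aws ec2 enable-ebs-encryption-by-default --region us-east-1
# Point default encryption at a customer-managed KMS key
aws ec2 modify-ebs-default-kms-key-id --kms-key-id alias/orchestr8-ebs --region us-east-1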
Next Steps
After infrastructure is provisioned:
1. Access EKS Cluster
# Update kubeconfig
aws eks update-kubeconfig --region us-east-1 --name cluster-name
# Verify cluster access
kubectl get nodes
2. Access ArgoCD
# Port-forward to ArgoCD
kubectl port-forward svc/argocd-server -n argocd 8080:80
# Get admin password
kubectl -n argocd get secret argocd-initial-admin-secret \
-o jsonpath="{.data.password}" | base64 -d
3. Configure DNS
# Get ALB hostname
kubectl get ingress -n argocd argocd-server-ingress \
-o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
# Create Route53 record pointing to ALB
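The record can also be created from the CLI; the hosted zone ID, record name, and ALB hostname below are placeholders for your own values:
aws route53 change-resource-record-sets \
  --hosted-zone-id Z0123456789EXAMPLE \
  --change-batch '{
    "Changes": [{
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "argocd.your-domain.com",
        "Type": "CNAME",
        "TTL": 300,
        "ResourceRecords": [{"Value": "your-alb-hostname.us-east-1.elb.amazonaws.com"}]
      }
    }]
  }'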
Destroying Infrastructure
To tear down all AWS resources:
o8 destroy --cluster cluster-name --region us-east-1
This will:
- Delete the EKS cluster and node groups
- Remove all associated IAM roles and policies
- Clean up VPC and networking resources
- Preserve Terraform state for recovery
warning
Ensure all persistent volumes and snapshots are backed up before destroying infrastructure.
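One way to back up EBS-backed persistent volumes before destroying, assuming the EBS CSI driver is in use (the volume ID is a placeholder; list the volume IDs behind your PVs first):
# List the EBS volume IDs behind the cluster's persistent volumes
kubectl get pv -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.csi.volumeHandle}{"\n"}{end}'
# Snapshot each volume before running o8 destroy
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 \
  --description "Backup before o8 destroy"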