What Is RC on X?
RC on X refers to remote container execution capabilities available through X platform services. This infrastructure lets developers deploy containerized applications without managing underlying hardware. X handles scaling, networking, and resource allocation automatically.
The system supports Docker containers natively. You push your image, configure deployment parameters, and RC on X provisions the environment. Costs scale with CPU allocation and runtime duration. A typical setup runs $0.05–$0.30 per hour depending on instance size.
Key differentiators from competitors: native integration with X's ecosystem, simpler authentication, and built-in monitoring dashboards. No complex IAM policies required for basic deployments. Most teams see 40% faster setup compared to AWS Fargate equivalents.
Core Architecture and How It Works
RC on X operates across three layers: the container registry, the orchestration engine, and the runtime environment. Your Docker image lives in X's container registry. The orchestration engine reads your manifest file (typically YAML-based) and schedules containers across availability zones. Runtime handles actual execution, resource limiting, and network isolation.
Architecture specifics matter. X deploys containers across redundant infrastructure in 6 geographic regions. Each region contains 3+ availability zones. Network latency between zones averages 5–12ms. Storage integrates directly with X's object storage service at no additional cost for the first 100GB monthly.
The platform enforces strict resource limits: maximum 32GB RAM per container, 8 vCPU allocation, and 1TB ephemeral storage. Request these resources through the manifest. Overprovisioning wastes money. Underprovisioning causes throttling and timeout failures. Start conservative—most workloads need 2GB RAM and 1 vCPU.
Setting Up Your First RC on X Deployment
Step 1: Containerize your application. Create a Dockerfile that includes all dependencies. Use multi-stage builds to reduce image size. Example: a Python FastAPI app should result in an image under 500MB. Large images (over 2GB) take 8–15 minutes to deploy.
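As a concrete (and hypothetical) illustration of the multi-stage pattern, a FastAPI image might look like this; the file names requirements.txt and main.py and the uvicorn entrypoint are assumptions about your project, not RC on X requirements:

```dockerfile
# Build stage: install dependencies into a virtualenv we can copy out.
FROM python:3.12-slim AS build
WORKDIR /app
COPY requirements.txt .
RUN python -m venv /venv && /venv/bin/pip install --no-cache-dir -r requirements.txt

# Runtime stage: only the virtualenv and app code ship in the final image.
FROM python:3.12-slim
WORKDIR /app
COPY --from=build /venv /venv
COPY main.py .
ENV PATH="/venv/bin:$PATH"
# Assumes uvicorn is listed in requirements.txt alongside fastapi.
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8080"]
```

Because the build tools and pip cache never reach the runtime stage, an image like this typically lands well under the 500MB target.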
Step 2: Push to X's container registry. Authentication uses API tokens tied to your X account. Command: docker push registry.x-platform.io/your-username/app-name:latest. Registry stores unlimited images. Bandwidth is free outbound to RC on X instances.
Step 3: Create a deployment manifest. This YAML file specifies the image, resource allocation, environment variables, and networking rules. Example configuration allocates 2 vCPU, 4GB RAM, and opens port 8080 to public traffic. Manifests typically run 30–50 lines.
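This article doesn't reproduce RC on X's actual schema, so the field names in this sketch are illustrative only; the shape, a short YAML file declaring image, resources, and networking, is the point:

```yaml
# Illustrative manifest -- field names are assumptions, not the documented schema.
name: app-name
image: registry.x-platform.io/your-username/app-name:latest
resources:
  vcpu: 2
  memory: 4GB
network:
  ports:
    - port: 8080
      public: true
env:
  LOG_LEVEL: info
```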
Step 4: Deploy via X's CLI or dashboard. CLI command: x rc deploy --manifest config.yaml. Dashboard deployment takes 2–3 minutes to provision. CLI deployment is faster (45–60 seconds). Both methods validate the manifest before execution.
Configuration Parameters and Best Practices
Environment variables control application behavior. You can pass 20+ variables without performance impact. Use X's secrets manager for API keys and credentials rather than plaintext values in manifests. Each secret reference adds under 5ms of startup latency.
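A hedged sketch of what a secret reference might look like in the manifest (the fromSecret syntax is invented for illustration; check X's secrets manager documentation for the real form):

```yaml
env:
  LOG_LEVEL: info            # plain value, fine to keep in the manifest
  STRIPE_API_KEY:
    fromSecret: stripe-key   # resolved by the secrets manager at startup
```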
Port binding requires explicit declaration. The platform assigns a public IP automatically. Static IPs cost $2/month. Most applications work fine with dynamic IPs; pay for a static one only if external services whitelist your address. Domain names pointing to RC on X instances propagate globally in under 60 seconds.
Startup commands and health checks matter tremendously. Define a health check endpoint: RC on X probes your application every 30 seconds, and failed checks trigger automatic container restarts. HTTP status 200 indicates healthy. The probe timeout defaults to 10 seconds; increase it if your app needs longer initialization.
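The probe's contract is easy to state in code: request the endpoint, treat HTTP 200 within the timeout as healthy, anything else as failed. A stdlib-only sketch (the /healthz path and the local test server are illustrative, not part of RC on X):

```python
import http.server
import threading
import urllib.error
import urllib.request

def probe(url: str, timeout: float = 10.0) -> bool:
    """True if the endpoint answers HTTP 200 within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, TimeoutError):
        return False  # non-2xx, unreachable, or timed out: failed check

class Health(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200 if self.path == "/healthz" else 404)
        self.end_headers()
    def log_message(self, *args):  # silence request logging for the demo
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Health)
threading.Thread(target=server.serve_forever, daemon=True).start()
base = f"http://127.0.0.1:{server.server_address[1]}"
healthy = probe(f"{base}/healthz")    # True
unhealthy = probe(f"{base}/missing")  # False: a 404 is a failed check
server.shutdown()
```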
Volume mounting integrates with X's storage service. Mount up to 5 volumes per container. Storage performs at 500 IOPS baseline, 5,000 IOPS burst. Database applications benefit from dedicated volumes. Ephemeral storage (local SSD) is faster but lost on restart—use for caching only. Database files belong on persistent volumes.
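A sketch of the volume section of a manifest; the field names are invented for illustration, but they capture the persistent-versus-ephemeral split described above:

```yaml
volumes:
  - name: pgdata
    mount: /var/lib/postgresql/data   # persistent: survives restarts
    size: 50GB
  - name: scratch
    mount: /tmp/cache                 # ephemeral local SSD: fast, lost on restart
    ephemeral: true
```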
Networking, Security, and Traffic Management
RC on X containers get automatic network isolation through internal firewalls. Inbound traffic flows only on ports you explicitly enable; outbound traffic is unrestricted. Public IPs are assigned by default. Restrict traffic using security groups (similar in spirit to AWS security groups). Rules are stateful, so responses from external servers pass through without additional rules.
Load balancing distributes traffic across multiple instances automatically. Deploy 3+ identical containers and RC on X distributes requests using round-robin. Session affinity available via cookie-based routing. Performance: average response time increases <2% under 1,000 concurrent connections.
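Round-robin itself is easy to picture: requests rotate through the instance pool in order, so load spreads evenly. A toy sketch (the instance addresses are made up):

```python
import itertools

instances = ["10.0.1.4", "10.0.2.7", "10.0.3.9"]  # three identical containers
rr = itertools.cycle(instances)

# Six requests land evenly: each instance serves exactly two.
assignments = [next(rr) for _ in range(6)]
```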
HTTPS/TLS is mandatory for production. X provides free certificates via Let's Encrypt integration. Certificates auto-renew 30 days before expiration. TLS termination happens at the platform edge, not your container. This saves 15–20% CPU on encrypted workloads. Cipher suites updated monthly automatically.
DDoS protection included in all plans. Platform absorbs attacks up to 10Gbps. Larger attacks trigger mitigation mode—legitimate traffic may experience 500–1000ms additional latency during active scrubbing. Mitigation events are rare (0.8% of accounts experience one annually).
Monitoring, Logging, and Debugging
Every RC on X container generates metrics automatically: CPU usage, memory consumption, network I/O, and request latency. Metrics stream to X's monitoring service. Query any metric from the past 90 days. Longer retention costs $0.10 per metric per month.
Logs are collected in real-time. Stdout and stderr stream to X's log aggregation service. Search logs by container ID, timestamp, or text pattern. Typical query completes in under 2 seconds across 100GB of logs. Log retention: 14 days free, then $0.50 per GB monthly.
Custom dashboards display your metrics and logs together. Build dashboards through the web console (5 minutes) or via API calls (for automation). Alerts trigger on threshold breaches: CPU exceeding 80% for 5 minutes, memory spikes, or error rate increases. Alerts integrate with Slack, PagerDuty, and email.
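The "CPU exceeding 80% for 5 minutes" rule can be sketched as a sliding-window check over per-minute samples; this illustrates the alert semantics, not X's implementation:

```python
from collections import deque

def should_alert(samples, threshold=80.0, window=5):
    # Fire only when the last `window` consecutive per-minute samples all
    # exceed the threshold -- "CPU above 80% for 5 minutes".
    recent = list(samples)[-window:]
    return len(recent) == window and all(s > threshold for s in recent)

cpu = deque([60, 85, 90, 88, 92], maxlen=5)
alert_early = should_alert(cpu)  # False: the 60% sample is still in the window
cpu.append(95)                   # the 60% sample rolls out of the window
alert_now = should_alert(cpu)    # True: five straight minutes above 80%
```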
Container debugging requires shell access, but by default containers don't include SSH servers. One option: enable debug mode in the manifest (adds 50MB to the image and is disabled automatically in production). Alternative: use the container exec feature to run bash directly. Exec sessions time out after 30 minutes of inactivity.
Scaling, Performance, and Optimization
Auto-scaling adjusts container count automatically based on demand. Configure minimum (1–10 containers) and maximum (10–500 containers) thresholds. Scaling metrics: CPU utilization, memory usage, or custom metrics from your application. Target CPU is typically 70%—containers scale up when approaching this limit.
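RC on X doesn't publish its scaling formula, but the proportional rule used by Kubernetes' Horizontal Pod Autoscaler is a reasonable mental model for target-CPU scaling:

```python
import math

def desired_replicas(current: int, cpu_util: float, target: float = 70.0,
                     floor: int = 1, ceiling: int = 500) -> int:
    # Proportional rule: scale the replica count by observed/target
    # utilization, then clamp to the configured min/max thresholds.
    want = math.ceil(current * cpu_util / target)
    return max(floor, min(ceiling, want))

up = desired_replicas(4, 90.0)    # 6: load is well above the 70% target
down = desired_replicas(4, 30.0)  # 2: sustained low load scales in
```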
Scale-up latency matters. New containers provision in 45–90 seconds. Plan for this lag. If you need sub-minute scaling, pre-provision extra capacity. Trade cost for performance: add 20% extra instances (idle cost is $5–10/month) to guarantee instant scaling.
Performance optimization reduces operating costs directly. A 1-second reduction in average response time cuts CPU usage 8–12%. Cache aggressively. Use Redis for session data (included free up to 10GB). Query optimization in database applications saves $100–500 monthly in CPU charges.
Cost per request: the median application costs $0.0003 per request, including compute, storage, and egress bandwidth. Expensive requests (5+ seconds) cost 15x more. Profile your application, identify slow endpoints, and fix them; the bulk of savings comes from the slowest 5% of traffic.
Pricing Models and Cost Optimization
RC on X pricing breaks into three components: compute (vCPU and RAM), storage, and data transfer. Compute costs $0.08 per vCPU-hour and $0.01 per GB-hour of RAM. Reserved capacity saves 25–40% depending on term length. Example: 4 vCPU and 8GB RAM running continuously (roughly 730 hours a month) costs about $292/month on-demand, or about $219/month with a 1-year reservation.
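As a quick sanity check on the quoted rates (assuming a 730-hour average month; the 25% discount is the 1-year reservation figure from the reservations paragraph):

```python
VCPU_HOUR = 0.08       # $ per vCPU-hour (quoted rate)
RAM_GB_HOUR = 0.01     # $ per GB-hour (quoted rate)
HOURS_PER_MONTH = 730  # average month

def monthly_cost(vcpu: int, ram_gb: int, discount: float = 0.0) -> float:
    hourly = vcpu * VCPU_HOUR + ram_gb * RAM_GB_HOUR
    return round(hourly * HOURS_PER_MONTH * (1 - discount), 2)

on_demand = monthly_cost(4, 8)                # 292.0
reserved = monthly_cost(4, 8, discount=0.25)  # 219.0 with a 1-year reservation
```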
Storage costs $0.023 per GB-month for persistent volumes. Egress bandwidth costs $0.12 per GB. Inbound is free. Most applications stay under $20/month for storage. Bandwidth often dominates costs for API servers—optimize by compressing responses (saves 60–80% bandwidth) and enabling caching headers.
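The compression claim is easy to sanity-check on repetitive JSON, the typical API payload shape (the payload here is invented):

```python
import gzip
import json

# A typical repetitive API payload: 500 similar records.
payload = json.dumps(
    [{"id": i, "name": f"user-{i}", "active": True} for i in range(500)]
).encode()

compressed = gzip.compress(payload)
savings = 1 - len(compressed) / len(payload)  # comfortably over 60% for JSON like this
```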
Reserved instances lock pricing for 1 or 3 years. 1-year reservations discount 25%; 3-year reservations discount 40%. Break-even on a 1-year reservation: roughly nine months of equivalent on-demand usage (12 months at 75% of the on-demand rate). Ideal for baseline workloads that run continuously. Spot instances (temporary excess capacity) discount 60% but can terminate with 5 minutes' notice.
Cost monitoring dashboards show daily spending. Set budget alerts. Most teams reduce costs 20–30% after implementing monitoring. Common optimizations: eliminate idle containers ($50–200/month savings), compress database backups (40% size reduction), and batch background jobs (consolidate to fewer, larger tasks).
Common Issues and Troubleshooting
Container exits immediately. Most common cause: the application crashes during startup. Check logs for errors. Verify your Dockerfile includes all dependencies. Test locally first with an identical environment. If local works but RC on X fails, examine resource limits: insufficient memory kills processes silently.
Health check failures cause restart loops. Symptoms: container restarts every 45 seconds. Fix: verify your health check endpoint returns HTTP 200 quickly. Database connectivity issues cause this frequently. Add retry logic to startup. Increase health check timeout from 10 to 20 seconds if your app needs initialization time.
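"Add retry logic to startup" can be as small as a loop with backoff around the connection call; a stdlib-only sketch with a simulated flaky database:

```python
import time

def retry(fn, attempts=5, base_delay=0.1):
    # Retry a startup dependency (e.g. the DB connection) with linear backoff.
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise  # out of attempts: let the container fail loudly
            time.sleep(base_delay * (i + 1))

state = {"calls": 0}
def flaky_connect():
    # Simulates a database that is ready only on the third attempt.
    state["calls"] += 1
    if state["calls"] < 3:
        raise ConnectionError("db not ready")
    return "connected"

result = retry(flaky_connect)  # "connected" after two failed attempts
```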
Network timeouts connecting to external APIs indicate egress problems. These are rare on the X platform (0.1% of containers affected). Workaround: use a regional endpoint closer to your container's zone. Latency to external services: East region to AWS us-east-1 averages 8ms; cross-region latency runs 40–80ms.
Storage permission errors happen when containers lack write access to volumes. Verify the container's user has the correct permissions. By default, containers run as user 1000; adjust ownership if needed with RUN chown -R 1000:1000 /data in the Dockerfile. Storage encryption is enabled by default with no performance penalty.
Comparing RC on X to Alternatives
RC on X versus AWS Fargate: X is 30% cheaper for small workloads (under 4 vCPU). AWS wins for enterprises needing advanced compliance and custom networking. Fargate includes VPC integration; X doesn't require it. Time to first deployment: X is 15 minutes, Fargate is 45 minutes.
RC on X versus DigitalOcean App Platform: DigitalOcean is simpler and about $5/month cheaper. X has superior auto-scaling, faster deployments, and integrated monitoring. Both handle small apps equally well; X excels at 100+ concurrent containers, while DigitalOcean is better for learning.
RC on X versus self-hosted Kubernetes: Kubernetes offers unlimited customization at significant operational cost, typically requiring 2–3 DevOps engineers. Self-hosted clusters bill for nodes hourly regardless of utilization; RC on X bills per-second with no management overhead. Choose RC on X for startup-stage companies and small-to-medium teams. Choose Kubernetes when you need advanced networking, custom scheduling, or multi-cloud deployments.
RC on X versus Google Cloud Run: Cloud Run charges per request, not per hour, which gives it the cost advantage when your application mostly idles or traffic is bursty. RC on X is better for sustained load. Cloud Run cold-starts in 500ms–2 seconds, while RC on X provisions new containers in 45–90 seconds. Use Cloud Run for APIs that scale to zero.