The $30B AI Infrastructure Boom: Careers at VAST Data, Firmus, and the Companies Powering AI
Everyone talks about AI applications — the chatbots, the copilots, the agents. But beneath every AI product is a massive infrastructure layer that's growing even faster. VAST Data just tripled its valuation to $30 billion. Firmus raised $505 million to build AI data centers. CoreWeave, Lambda Labs, and dozens of others are scaling at breakneck speed.
This infrastructure layer is the picks-and-shovels play of the AI gold rush. And it's hiring thousands of people who aren't AI researchers.
Why AI Infrastructure Is a Career Goldmine
The Math Is Simple
Every AI model needs:
- Compute: GPUs, TPUs, custom silicon
- Storage: Petabytes of training data, model checkpoints, inference caches
- Networking: High-bandwidth, low-latency connections between thousands of GPUs
- Cooling: These systems generate enormous heat
- Power: Data centers are now the largest consumers of electricity in many regions
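The power and cooling points are easy to make concrete with a back-of-envelope estimate. The numbers below are illustrative assumptions (a 16K-GPU cluster, ~700 W per accelerator, which is in line with published figures for current high-end GPUs, 50% host overhead, and a PUE of 1.3), not the specs of any real deployment:

```python
# Back-of-envelope power estimate for a GPU training cluster.
# All figures are illustrative assumptions, not vendor specs.

GPU_COUNT = 16_384      # a large but realistic training cluster (assumed)
GPU_WATTS = 700         # per-accelerator board power, H100-class (assumed)
HOST_OVERHEAD = 0.5     # CPUs, NICs, storage add ~50% on top of GPUs (assumed)
PUE = 1.3               # power usage effectiveness: cooling + losses (assumed)

it_load_mw = GPU_COUNT * GPU_WATTS * (1 + HOST_OVERHEAD) / 1e6
facility_mw = it_load_mw * PUE

print(f"IT load:       {it_load_mw:.1f} MW")
print(f"Facility draw: {facility_mw:.1f} MW")
```

Roughly 22 MW for one cluster, which is why data centers now rival heavy industry as electricity consumers, and why cooling specialists like Firmus exist.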
The companies building this infrastructure have years of demand visibility already booked. As long as frontier models keep getting bigger (and current roadmaps suggest they will), the infrastructure layer grows with them.
The Revenue Is Real
Unlike many AI application startups that are still searching for product-market fit, infrastructure companies have paying customers from day one. VAST Data, CoreWeave, and Lambda Labs are all generating hundreds of millions in annual revenue. This means job stability and growth.
The Key Players
VAST Data ($30B valuation, $1B Series F)
- What they do: AI-optimized data platform
- Backed by: Nvidia, Drive Capital, Access Industries
- Revenue: Hundreds of millions (growing 3x+ annually)
- Hiring for: Storage engineers, distributed systems engineers, sales engineers, customer success
- Why join: Category leader with massive growth trajectory
Firmus Technologies ($5.5B valuation, $505M raised)
- What they do: AI data centers with liquid immersion cooling
- Backed by: Coatue, Nvidia
- Hiring for: Data center engineers, electrical engineers, operations managers, project managers
- Why join: Solving the physical constraints of AI scaling
CoreWeave ($35B+ valuation)
- What they do: GPU cloud provider
- Hiring for: Cloud engineers, infrastructure engineers, sales, finance
- Why join: One of the fastest-growing companies in history
Lambda Labs
- What they do: GPU cloud and AI infrastructure
- Hiring for: Systems engineers, ML infrastructure engineers, sales
- Why join: Developer-focused, strong engineering culture
Together AI
- What they do: AI inference and training platform
- Hiring for: ML systems engineers, distributed computing engineers
- Why join: At the intersection of infrastructure and AI research
Roles in AI Infrastructure
Engineering Roles
Distributed Systems Engineer
- Design and build systems that coordinate thousands of GPUs
- Skills: Go, Rust, C++, distributed consensus, networking
- Salary: $200K - $350K
Storage Engineer
- Build high-performance storage systems for AI workloads
- Skills: File systems, object storage, NVMe, RDMA
- Salary: $180K - $300K
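Checkpointing is a good way to see why this role exists: writing a large model's training state through a single storage stream stalls the whole cluster, so storage engineers shard it. A sketch with illustrative numbers (checkpoint size, per-target throughput, and writer count are all assumptions):

```python
# How long does writing a model checkpoint take?
# Illustrative numbers, not benchmarks of any real system.

CHECKPOINT_GB = 1120    # full optimizer state for a large model (assumed)
STREAM_GBPS = 5         # GB/s one NVMe-class target sustains (assumed)
PARALLEL_WRITERS = 64   # checkpoint sharded across many targets (assumed)

naive_s = CHECKPOINT_GB / STREAM_GBPS
sharded_s = CHECKPOINT_GB / (STREAM_GBPS * PARALLEL_WRITERS)

print(f"Single-stream write: {naive_s:.0f} s")
print(f"Sharded across {PARALLEL_WRITERS} writers: {sharded_s:.2f} s")
```

Turning a ~4-minute stall into a few seconds per checkpoint is the kind of win these teams deliver.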
Infrastructure/SRE
- Keep massive GPU clusters running reliably
- Skills: Kubernetes, monitoring, automation, Linux
- Salary: $170K - $280K
Network Engineer
- Design high-bandwidth networks for GPU clusters
- Skills: InfiniBand, RoCE, network topology, SDN
- Salary: $160K - $260K
Non-Engineering Roles
Data Center Operations Manager
- Oversee physical infrastructure operations
- Background: Facilities management, electrical engineering
- Salary: $130K - $200K
Sales Engineer / Solutions Architect
- Help customers design AI infrastructure solutions
- Background: Technical + customer-facing experience
- Salary: $150K - $250K + commission
Project Manager (Construction/Buildout)
- Manage data center construction projects
- Background: Construction management, project management
- Salary: $120K - $180K
Energy/Sustainability Manager
- Optimize power usage and renewable energy integration
- Background: Energy engineering, sustainability
- Salary: $130K - $200K
How to Position Yourself
Coming from Cloud/DevOps
Your skills are directly transferable. The main addition: understanding GPU workloads, model training patterns, and the specific challenges of AI infrastructure (memory bandwidth, interconnect topology, etc.).
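One way to see why interconnect topology matters: the ring all-reduce commonly used to synchronize gradients moves roughly 2(N-1)/N bytes over each GPU's link per byte of gradient, so the slowest link bounds every training step. A sketch with assumed numbers (1,024 GPUs, a 70B-parameter model with fp16 gradients, 400 Gb/s links):

```python
# Why interconnect bandwidth dominates large-scale training:
# ring all-reduce sends ~2*(N-1)/N of the gradient volume over
# each GPU's link per step. All numbers are illustrative.

N_GPUS = 1024
PARAMS = 70e9            # 70B-parameter model (assumed)
BYTES_PER_GRAD = 2       # fp16 gradients (assumed)
LINK_GBPS = 400          # per-GPU link bandwidth in Gb/s (assumed)

grad_bytes = PARAMS * BYTES_PER_GRAD
traffic_per_gpu = 2 * (N_GPUS - 1) / N_GPUS * grad_bytes
seconds = traffic_per_gpu / (LINK_GBPS * 1e9 / 8)  # Gb/s -> bytes/s

print(f"Gradient volume: {grad_bytes / 1e9:.0f} GB")
print(f"All-reduce lower bound per step: {seconds:.2f} s")
```

Several seconds of pure communication per step is why these clusters use dedicated high-bandwidth fabrics rather than ordinary data center networking.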
Coming from Traditional Data Centers
The physical infrastructure knowledge is invaluable. Learn about:
- GPU server architectures
- Liquid cooling systems
- High-density power distribution
- AI workload patterns
Coming from Networking
Network engineering for AI clusters is one of the highest-demand roles. InfiniBand and RoCE experience is gold. If you don't have it, start learning — the concepts transfer from traditional networking.
New to the Field
Start with cloud certifications (AWS, GCP) and build toward:
- Understanding of GPU computing (CUDA basics)
- Kubernetes and container orchestration
- Linux systems administration
- Basic understanding of ML training workflows
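A good first exercise for understanding ML training workflows is estimating memory: weights plus optimizer state usually exceed a single GPU, which is the root cause of everything above (distributed systems, fast storage, fat networks). A minimal sketch using the classic mixed-precision Adam rule of thumb of ~16 bytes per parameter; the model size and GPU memory figure are assumptions:

```python
# Rough training-memory estimate per parameter for mixed-precision
# Adam (~16 bytes/param before activations). Real frameworks vary.

PARAMS = 70e9               # 70B-parameter model (assumed)
BYTES = {
    "fp16 weights": 2,
    "fp16 gradients": 2,
    "fp32 master weights": 4,
    "fp32 Adam momentum": 4,
    "fp32 Adam variance": 4,
}

per_param = sum(BYTES.values())        # 16 bytes/param
total_gb = PARAMS * per_param / 1e9

GPU_MEM_GB = 80                        # H100-class card (assumed)
min_gpus = -(-total_gb // GPU_MEM_GB)  # ceiling division

print(f"{per_param} bytes/param -> {total_gb:.0f} GB of training state")
print(f"At least {min_gpus:.0f} x {GPU_MEM_GB} GB GPUs for state alone")
```

Over a terabyte of state before a single activation is computed: no single device holds it, so the model must be sharded across many GPUs.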
The Long-Term Career Bet
AI infrastructure is a $100+ billion market that's growing 40%+ annually. The companies in this space will be the Amazons and Microsofts of the next decade. Getting in now — even in a junior role — positions you for extraordinary career growth as the industry scales.
Resources
- Explore AI Infrastructure Startups — Browse companies on StartupJob
- Q1 2026 Funding Record — The funding context
- Salary Calculator — Benchmark infrastructure salaries
