Kernel Engineer

GW230
  • $250,000–$350,000
  • Mountain View, CA
  • Permanent

About the job


Kernel Engineer – AI Hardware – Hybrid | Mountain View, CA


Acceler8 Talent is seeking an experienced Kernel Engineer to join a well-funded, Mountain View-based startup whose hardware promises to drastically change the economics of AI compute for the largest and most demanding models.


Founded by engineers behind some of the industry’s most successful semiconductor and AI platforms, this company is building a next-generation hardware-software stack designed to push the limits of performance and efficiency for large-scale AI workloads.


As a Kernel Engineer, you will be responsible for designing and optimizing performance-critical kernels that interface directly with custom AI hardware. You will work closely with the ML Research and Hardware Engineering teams, providing a programmer’s perspective on hardware architecture and ensuring tight integration across the software stack.


Responsibilities:

  • Design, implement, and optimize high-performance kernels that interface directly with custom AI hardware
  • Partner closely with ML Research and Hardware Engineering teams to translate algorithmic intent into efficient kernel implementations
  • Provide architectural feedback and guidance from a programmer’s perspective to influence hardware and system design decisions
  • Optimize kernels using techniques such as parallelism, SIMD/vectorization, low-level memory optimization, and instruction-level tuning
  • Support performance analysis, profiling, and debugging across kernels, runtime, and hardware


Requirements:

  • Bachelor’s degree in Computer Science or equivalent practical experience
  • Experience optimizing software for specialized or accelerator hardware, including techniques such as parallel programming, SIMD, low-level C/C++, assembly-level optimization, or GPU/CUDA programming
  • Proficiency in at least one of: Assembly, C, C++, Zig, or Rust
  • Strong understanding of performance bottlenecks across compute, memory, and data movement


Preferred Qualifications:

  • Experience implementing kernels for ML workloads, including models such as Transformers
  • Familiarity with distributed and parallel execution models, including AllReduce, AllToAll, data parallelism, and tensor parallelism
  • Working knowledge of compiler fundamentals and how code is lowered, optimized, and executed on modern hardware


If you're interested in building the future of AI compute, apply here or reach out to me at ltomaszko@acceler8talent.com to discuss further.


Luke Tomaszko, Senior Semiconductor & Chip Design Recruiter
