ML Research Engineer - San Francisco/Remote
Acceler8 Talent is seeking a talented and driven ML Research Engineer to join the Neuromorphic Architecture Team, building simulators of novel neuromorphic architectures and a training pipeline that implements common neural network architectures on them. You will work closely with the algorithms teams to understand algorithmic requirements and how they map onto the hardware.
The company is backed by world leaders in artificial intelligence, semiconductors, and technology, including Sam Altman and Jeff Rothschild. It has developed a novel processor architecture, inspired by the human brain, for training large deep learning models.
Based in the Bay Area; open to fully remote.
Qualifications:
- Ph.D. in Electrical Engineering with an emphasis on AI acceleration
- Experience with in-memory computing and non-volatile memory (NVM) technologies (RRAM, PCM)
- Strong understanding of modern hardware architecture
- In-depth understanding of low-precision training and inference methodologies
- Expertise in training algorithms (backpropagation, SGD, second-order methods, etc.), deep learning architectures (Transformers, ResNets, DLRMs, MLP-Mixers, etc.), and their underlying math
- Proficient coding skills with GPU-accelerated PyTorch in cloud environments, or other modern machine learning libraries such as TensorFlow or Caffe
Preferred Qualifications:
- Extensive knowledge of sparsity in deep learning (sparse weights, activations, gradients, attention, etc.)
- Published code portfolio
- Familiarity with biologically plausible learning methods (e.g., feedback alignment and equilibrium propagation)
- Deep knowledge of mixed-signal design concepts
