Founding AI Engineer — Neocambrian
Name
*
Email
*
LinkedIn profile
*
GitHub profile
*
Have you trained or fine-tuned a >0.5B-parameter model?
*
Have you written custom CUDA kernels or profiled GPU memory at any level?
Have you deployed a model to production with a hard latency or throughput SLA?
*
Have you used SGLang, vLLM, TensorRT/Triton, or any other high-performance inference engine?
*
Do you follow embodied AI / robotics research regularly? How many VLA papers have you read in the past six months?
*
Have you implemented something from a paper that had no public code?
*
Have you built a video annotation pipeline (at any scale)? If so, did you use LLMs for it?
*
Are you currently based in or willing to relocate to Delhi NCR by April 2026?
*
Video generation models like Seedance are getting good enough to synthesise photorealistic egocentric footage. What's your take on what happens to this business model if synthetic video reaches the point where it's cheaper and more scalable than real-world collection?
*
RGB is cheap. Everything else (depth, IMU, tactile, audio, gaze, text) adds cost, complexity, and collection overhead. What's your view on which modalities actually matter for manipulation tasks, and which are nice-to-have?
*
Scaling laws have held surprisingly well for language and vision. For physical AI, and manipulation specifically, what's your intuition on data requirements?
*
Anything else you'd like to share with us, or questions you have for us:
*
Submit