Julien Martel is a postdoctoral scholar in the Stanford Computational Imaging Lab. He received his Ph.D. from ETH Zurich in 2019, advised by Matthew Cook at the Institute of Neuroinformatics. His research interests are in unconventional visual sensing and computing. More specifically, his current topics of research include the co-design of hardware and algorithms for visual sensing, the design of methods for vision sensors with in-pixel computing capabilities, and the use of novel neural representations to store and compute on visual data.

Professional Education

  • Doctor of Philosophy, Eidgenössische Technische Hochschule (ETH Zurich) (2019)
  • Master of Science, Institut National des Sciences Appliquées (INSA) (2013)
  • Ingénieur, Institut National des Sciences Appliquées (INSA) (2012)

Current Research and Scholarly Interests

My research interests lie in artificial vision and machine intelligence.
I design systems and algorithms using "unconventional" vision sensors and processors to give machines some understanding of the world around them.

I am interested in novel sensors coupled with "in-pixel" processing elements that can produce kinds of visual information beyond what "conventional" cameras deliver. I like the idea of designing vision systems end to end: from the physics of light transduction in silicon up to the algorithmic frameworks that exploit the peculiarities of these new vision devices.

Lab Affiliations

All Publications

  • Neural Sensors: Learning Pixel Exposures for HDR Imaging and Video Compressive Sensing With Programmable Sensors. IEEE Transactions on Pattern Analysis and Machine Intelligence. Martel, J. N., Müller, L. K., Carey, S. J., Dudek, P., Wetzstein, G. 2020; 42 (7): 1642–53


    Camera sensors rely on global or rolling shutter functions to expose an image. This fixed function approach severely limits the sensors' ability to capture high-dynamic-range (HDR) scenes and resolve high-speed dynamics. Spatially varying pixel exposures have been introduced as a powerful computational photography approach to optically encode irradiance on a sensor and computationally recover additional information of a scene, but existing approaches rely on heuristic coding schemes and bulky spatial light modulators to optically implement these exposure functions. Here, we introduce neural sensors as a methodology to optimize per-pixel shutter functions jointly with a differentiable image processing method, such as a neural network, in an end-to-end fashion. Moreover, we demonstrate how to leverage emerging programmable and re-configurable sensor-processors to implement the optimized exposure functions directly on the sensor. Our system takes specific limitations of the sensor into account to optimize physically feasible optical codes and we evaluate its performance for snapshot HDR and high-speed compressive imaging both in simulation and experimentally with real scenes.
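    The spatially varying exposure idea the abstract builds on can be illustrated with a small simulation. This is a minimal sketch of a classical heuristic coding scheme (a fixed tile of per-pixel exposure times and a simple per-pixel recovery), not the paper's learned shutter functions or neural decoder; all array sizes, exposure values, and variable names are assumptions made for the example.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Ground-truth irradiance spanning roughly four orders of magnitude,
    # i.e. an HDR scene a single global exposure could not capture.
    H, W = 64, 64
    irradiance = 10.0 ** rng.uniform(-2, 2, size=(H, W))

    # A fixed 2x2 tile of per-pixel exposure times, repeated over the
    # sensor (a heuristic code; the paper instead *learns* per-pixel
    # shutter functions jointly with a differentiable decoder).
    tile = np.array([[1.0, 0.1],
                     [0.01, 0.001]])
    exposures = np.tile(tile, (H // 2, W // 2))

    # Sensor model: linear response clipped at full well (normalized to 1).
    measurement = np.clip(irradiance * exposures, 0.0, 1.0)

    # Per-pixel recovery: divide out the exposure wherever the pixel did
    # not saturate; saturated pixels are marked invalid (a full pipeline
    # would fill them in from unsaturated neighbors).
    valid = measurement < 1.0
    recovered = np.where(valid, measurement / exposures, np.nan)

    # On unsaturated pixels the recovery is exact up to floating point.
    rel_err = np.abs(recovered[valid] - irradiance[valid]) / irradiance[valid]
    print(float(rel_err.max()))
    ```

    Because neighboring pixels carry different exposures, at least one pixel in each tile typically lands in the sensor's linear range, which is what lets a single snapshot encode an HDR scene; the paper's contribution is replacing such hand-designed codes with exposure functions optimized end to end on a programmable sensor-processor.
    
    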

    DOI: 10.1109/TPAMI.2020.2986944

    PubMedID: 32305899