Bio


Onat is a Ph.D. student in Electrical Engineering at Stanford, advised by Prof. Brian Hargreaves and Prof. Akshay Chaudhari. He works on the analysis and optimization of MRI reconstruction algorithms to mitigate noise and enhance image quality, combining theoretical insights with machine-learning-based approaches. Previously, he was a Master's student in Electrical and Electronics Engineering at Bilkent University, Ankara, Turkey, where he worked with Prof. Tolga Çukur at the National Magnetic Resonance Research Center (UMRAM) from 2020 to 2023. There, he leveraged novel deep learning and computer vision algorithms to devise state-of-the-art medical image synthesis and MRI reconstruction techniques. He received his B.S. degree in Electrical and Electronics Engineering from Bilkent University in 2020.

Current Research and Scholarly Interests


My research lies at the intersection of machine learning, computer vision, medical imaging, and healthcare. I leverage deep learning and computer vision algorithms to devise state-of-the-art biomedical imaging techniques that address challenges in multi-modal medical image synthesis and MRI reconstruction. My research aims to improve the resolution, contrast, and diversity of medical images, enhancing diagnostic information and patient comfort, decreasing examination costs and toxicity exposure, and facilitating multi-modal medical imaging even in low-resource settings. To achieve this goal, I focus on two aspects of deep learning for medical imaging: (1) devising novel deep architectures that can effectively capture the complex relationships between different modalities and generate realistic and diverse images, and (2) introducing novel and robust learning strategies that can overcome the challenges of data scarcity, domain shift, and mode collapse in medical image synthesis and reconstruction.

All Publications


  • BolT: Fused window transformers for fMRI time series analysis. Medical Image Analysis. Bedel, H. A., Sivgin, I., Dalmaz, O., Dar, S. U., Çukur, T. 2023; 88: 102841

    Abstract

    Deep-learning models have enabled performance leaps in analysis of high-dimensional functional MRI (fMRI) data. Yet, many previous methods are suboptimally sensitive to contextual representations across diverse time scales. Here, we present BolT, a blood-oxygen-level-dependent transformer model, for analyzing multi-variate fMRI time series. BolT leverages a cascade of transformer encoders equipped with a novel fused window attention mechanism. Encoding is performed on temporally-overlapped windows within the time series to capture local representations. To integrate information temporally, cross-window attention is computed between base tokens in each window and fringe tokens from neighboring windows. To gradually transition from local to global representations, the extent of window overlap, and thereby the number of fringe tokens, is progressively increased across the cascade. Finally, a novel cross-window regularization is employed to align high-level classification features across the time series. Comprehensive experiments on large-scale public datasets demonstrate the superior performance of BolT against state-of-the-art methods. Furthermore, explanatory analyses to identify landmark time points and regions that contribute most significantly to model decisions corroborate prominent neuroscientific findings in the literature.

    View details for DOI 10.1016/j.media.2023.102841

    View details for PubMedID 37224718
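
    As a rough illustration of the fused window attention described above, the following is a minimal, hypothetical PyTorch sketch: base tokens of each window serve as queries that attend over the base tokens plus fringe tokens borrowed from neighboring windows. All names, dimensions, and window settings are illustrative assumptions; the published BolT model additionally uses a cascade of encoders with growing overlap and cross-window regularization.

      import torch
      import torch.nn as nn

      # Hypothetical sketch of fused window attention; not the authors' code.
      class FusedWindowAttention(nn.Module):
          def __init__(self, dim=128, heads=4, window=20, fringe=5):
              super().__init__()
              self.window, self.fringe = window, fringe
              self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
              self.norm = nn.LayerNorm(dim)

          def forward(self, x):  # x: (batch, time, dim) fMRI token series
              T = x.shape[1]
              outs = []
              for start in range(0, T - self.window + 1, self.window):
                  base = x[:, start:start + self.window]          # base tokens
                  lo = max(0, start - self.fringe)                # fringe tokens
                  hi = min(T, start + self.window + self.fringe)  # from neighbors
                  ctx = x[:, lo:hi]
                  # Cross-window attention: queries are base tokens only,
                  # keys/values span base plus fringe tokens.
                  att, _ = self.attn(base, ctx, ctx)
                  outs.append(self.norm(base + att))
              return torch.cat(outs, dim=1)

      # Toy usage: 100 time points, window of 20 -> 5 windows.
      # y = FusedWindowAttention()(torch.randn(2, 100, 128))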

  • Unsupervised Medical Image Translation with Adversarial Diffusion Models. IEEE Transactions on Medical Imaging. Ozbey, M., Dalmaz, O., Dar, S. U., Bedel, H. A., Ozturk, S., Gungor, A., Cukur, T. 2023; PP

    Abstract

    Imputation of missing images via source-to-target modality translation can improve diversity in medical imaging protocols. A pervasive approach for synthesizing target images involves one-shot mapping through generative adversarial networks (GAN). Yet, GAN models that implicitly characterize the image distribution can suffer from limited sample fidelity. Here, we propose a novel method based on adversarial diffusion modeling, SynDiff, for improved performance in medical image translation. To capture a direct correlate of the image distribution, SynDiff leverages a conditional diffusion process that progressively maps noise and source images onto the target image. For fast and accurate image sampling during inference, large diffusion steps are taken with adversarial projections in the reverse diffusion direction. To enable training on unpaired datasets, a cycle-consistent architecture is devised with coupled diffusive and non-diffusive modules that bilaterally translate between two modalities. Extensive assessments are reported on the utility of SynDiff against competing GAN and diffusion models in multi-contrast MRI and MRI-CT translation. Our demonstrations indicate that SynDiff offers quantitatively and qualitatively superior performance over competing baselines.

    View details for DOI 10.1109/TMI.2023.3290149

    View details for PubMedID 37379177
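
    The core adversarial reverse-diffusion step admits a compact sketch. Below is a minimal, hypothetical PyTorch rendering of one training iteration in the spirit of SynDiff: the generator takes a large denoising step conditioned on the source image, and a discriminator supplies the adversarial projection. G, D, the schedule abar, and all names are illustrative assumptions; the published model further couples diffusive and non-diffusive modules in a cycle-consistent design for unpaired data.

      import torch
      import torch.nn.functional as F

      # Hypothetical one-step training sketch; G and D are any conditional
      # image-to-image generator/discriminator, abar a cumulative-alpha schedule.
      def adversarial_diffusion_step(G, D, x0, src, t, k, abar, opt_g, opt_d):
          # Forward-diffuse the target image to steps t-k and t.
          x_tk = abar[t - k].sqrt() * x0 + (1 - abar[t - k]).sqrt() * torch.randn_like(x0)
          ratio = abar[t] / abar[t - k]
          x_t = ratio.sqrt() * x_tk + (1 - ratio).sqrt() * torch.randn_like(x0)

          # Large reverse step: estimate x_{t-k} from (x_t, source image, t).
          fake = G(x_t, src, t)

          # Discriminator: real vs generated denoised samples, conditioned on x_t.
          d_loss = F.softplus(D(fake.detach(), x_t, t)).mean() \
                 + F.softplus(-D(x_tk, x_t, t)).mean()
          opt_d.zero_grad(); d_loss.backward(); opt_d.step()

          # Generator: non-saturating adversarial loss on the denoised estimate.
          g_loss = F.softplus(-D(G(x_t, src, t), x_t, t)).mean()
          opt_g.zero_grad(); g_loss.backward(); opt_g.step()
          return d_loss.item(), g_loss.item()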

  • User Feedback-based Online Learning for Intent Classification. Gonc, K., Saglam, B., Dalmaz, O., Cukur, T., Kozat, S. S., Dibeklioglu, H. ACM. 2023: 613-621
  • Semi-Supervised Learning of MRI Synthesis Without Fully-Sampled Ground Truths. IEEE Transactions on Medical Imaging. Yurt, M., Dalmaz, O., Dar, S., Ozbey, M., Tinaz, B., Oguz, K., Cukur, T. 2022; 41 (12): 3895-3906

    Abstract

    Learning-based translation between MRI contrasts involves supervised deep models trained using high-quality source- and target-contrast images derived from fully-sampled acquisitions, which might be difficult to collect under limitations on scan costs or time. To facilitate curation of training sets, here we introduce the first semi-supervised model for MRI contrast translation (ssGAN) that can be trained directly using undersampled k-space data. To enable semi-supervised learning on undersampled data, ssGAN introduces novel multi-coil losses in image, k-space, and adversarial domains. The multi-coil losses are selectively enforced on acquired k-space samples, unlike traditional losses in single-coil synthesis models. Comprehensive experiments on retrospectively undersampled multi-contrast brain MRI datasets are provided. Our results demonstrate that ssGAN yields performance on par with a supervised model, while outperforming single-coil models trained on coil-combined magnitude images. It also outperforms cascaded reconstruction-synthesis models where a supervised synthesis model is trained following self-supervised reconstruction of undersampled data. Thus, ssGAN holds great promise to improve the feasibility of learning-based multi-contrast MRI synthesis.

    View details for DOI 10.1109/TMI.2022.3199155

    View details for Web of Science ID 000907324600035

    View details for PubMedID 35969576
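
    The idea of enforcing losses only on acquired k-space samples admits a short sketch. The snippet below is a minimal, hypothetical PyTorch illustration of one such selective multi-coil k-space loss; tensor names and shapes are assumptions, and the actual ssGAN combines image-domain, k-space, and adversarial losses.

      import torch

      # Hypothetical selective multi-coil k-space loss; names are illustrative.
      def selective_kspace_loss(pred_img, meas_kspace, mask, coil_sens):
          # pred_img:    (B, H, W)    complex synthesized target image
          # coil_sens:   (B, C, H, W) complex coil sensitivity maps
          # meas_kspace: (B, C, H, W) complex acquired multi-coil k-space
          # mask:        (B, 1, H, W) binary sampling mask (1 = acquired)
          coil_imgs = coil_sens * pred_img.unsqueeze(1)       # project to coils
          pred_k = torch.fft.fft2(coil_imgs, norm="ortho")    # image -> k-space
          # Penalize only the acquired k-space locations.
          return (mask * (pred_k - meas_kspace)).abs().mean()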

  • ResViT: Residual Vision Transformers for Multimodal Medical Image Synthesis. IEEE Transactions on Medical Imaging. Dalmaz, O., Yurt, M., Cukur, T. 2022; 41 (10): 2598-2614

    Abstract

    Generative adversarial models with convolutional neural network (CNN) backbones have recently been established as state-of-the-art in numerous medical image synthesis tasks. However, CNNs are designed to perform local processing with compact filters, and this inductive bias compromises learning of contextual features. Here, we propose a novel generative adversarial approach for medical image synthesis, ResViT, that leverages the contextual sensitivity of vision transformers along with the precision of convolution operators and realism of adversarial learning. ResViT's generator employs a central bottleneck comprising novel aggregated residual transformer (ART) blocks that synergistically combine residual convolutional and transformer modules. Residual connections in ART blocks promote diversity in captured representations, while a channel compression module distills task-relevant information. A weight sharing strategy is introduced among ART blocks to mitigate computational burden. A unified implementation is introduced to avoid the need to rebuild separate synthesis models for varying source-target modality configurations. Comprehensive demonstrations are performed for synthesizing missing sequences in multi-contrast MRI, and CT images from MRI. Our results indicate the superiority of ResViT over competing CNN- and transformer-based methods in terms of qualitative observations and quantitative metrics.

    View details for DOI 10.1109/TMI.2022.3167808

    View details for PubMedID 35436184
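
    The ART block described above combines a residual convolutional path, a transformer path over flattened feature maps, and channel compression, which the following minimal, hypothetical PyTorch sketch illustrates. Dimensions, the class name, and the single-layer transformer are assumptions; the published ResViT stacks several ART blocks with weight sharing inside an adversarial synthesis model.

      import torch
      import torch.nn as nn

      # Hypothetical ART-style block sketch; not the authors' implementation.
      class ARTBlock(nn.Module):
          def __init__(self, ch=256, heads=8):
              super().__init__()
              self.conv = nn.Sequential(                      # local CNN path
                  nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
                  nn.Conv2d(ch, ch, 3, padding=1))
              self.transformer = nn.TransformerEncoderLayer(  # contextual path
                  d_model=ch, nhead=heads, batch_first=True)
              self.compress = nn.Conv2d(2 * ch, ch, 1)        # channel compression

          def forward(self, x):                               # x: (B, ch, H, W)
              B, C, H, W = x.shape
              local = x + self.conv(x)                        # residual conv branch
              tokens = x.flatten(2).transpose(1, 2)           # (B, H*W, ch)
              ctx = self.transformer(tokens).transpose(1, 2).reshape(B, C, H, W)
              # Distill task-relevant features from the concatenated branches.
              return self.compress(torch.cat([local, x + ctx], dim=1))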

  • Detecting COVID-19 from Respiratory Sound Recordings with Transformers. Aytekin, I., Dalmaz, O., Ankishan, H., Saritas, E. U., Bagci, U., Cukur, T., Celik, H., Drukker, K., Iftekharuddin, K. M. SPIE. 2022

    View details for DOI 10.1117/12.2611490

    View details for Web of Science ID 000838048600005