Min Wu
Postdoctoral Scholar, Computer Science
Professional Education
-
Doctor of Philosophy, University of Oxford, Computer Science (2022)
Current Research and Scholarly Interests
Responsible AI, AI safety, trustworthy AI, robustness, explainability and interpretability.
Formal methods, automated verification, verification of deep neural networks, formal explainable AI.
All Publications
-
Towards Efficient Verification of Quantized Neural Networks
Association for the Advancement of Artificial Intelligence (AAAI). 2024: 21152-21160
Web of Science ID: 001239984900029
-
Parallel Verification for δ-Equivalence of Neural Network Quantization
Springer International Publishing AG. 2024: 78-99
DOI: 10.1007/978-3-031-65112-0_4
Web of Science ID: 001312952000004
-
Marabou 2.0: A Versatile Formal Analyzer of Neural Networks
Springer International Publishing AG. 2024: 249-264
DOI: 10.1007/978-3-031-65630-9_13
Web of Science ID: 001307890400013
-
Convex Bounds on the Softmax Function with Applications to Robustness Verification
Proceedings of The 26th International Conference on Artificial Intelligence and Statistics. 2023: 6853-6878
-
Soy: An Efficient MILP Solver for Piecewise-Affine Systems
IEEE. 2023: 6281-6288
DOI: 10.1109/IROS55552.2023.10342011
Web of Science ID: 001136907800122
-
VERIX: Towards Verified Explainability of Deep Neural Networks
Neural Information Processing Systems (NeurIPS). 2023
Web of Science ID: 001230083404006
-
Full Poincaré polarimetry enabled through physical inference
Optica. 2022; 9 (10): 1109-1114
DOI: 10.1364/OPTICA.452646
Web of Science ID: 000880667200002
-
A survey of safety and trustworthiness of deep neural networks: Verification, testing, adversarial attack and defence, and interpretability
Computer Science Review. 2020; 37
DOI: 10.1016/j.cosrev.2020.100270
Web of Science ID: 000559782300009
-
A game-based approximate verification of deep neural networks with provable guarantees
Theoretical Computer Science. 2020; 807: 298-329
DOI: 10.1016/j.tcs.2019.05.046
Web of Science ID: 000512219400020
-
Robustness Guarantees for Deep Neural Networks on Videos
IEEE. 2020: 308-317
DOI: 10.1109/CVPR42600.2020.00039
Web of Science ID: 000620679500032
-
Assessing Robustness of Text Classification through Maximal Safe Radius Computation
Findings of the Association for Computational Linguistics: EMNLP. 2020: 2949-2968
DOI: 10.18653/v1/2020.findings-emnlp.266
-
Gaze-based Intention Anticipation over Driving Manoeuvres in Semi-Autonomous Vehicles
IEEE. 2019: 6210-6216
Web of Science ID: 000544658404127
-
Global Robustness Evaluation of Deep Neural Networks with Provable Guarantees for the Hamming Distance
Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence. 2019: 5944-5952
DOI: 10.24963/ijcai.2019/824
-
Concolic Testing for Deep Neural Networks
IEEE. 2018: 109-119
DOI: 10.1145/3238147.3238172
Web of Science ID: 000553784500014
-
Safety Verification of Deep Neural Networks
Springer International Publishing AG. 2017: 3-29
DOI: 10.1007/978-3-319-63387-9_1
Web of Science ID: 000432196400001