Faculty
Hong Zhang is a Fellow of the Canadian Academy of Engineering, an IEEE Fellow, a Leading Talent of the Guangdong "Pearl River Talent Plan", and a Distinguished Talent of the Shenzhen "Peacock Plan". He is currently a Chair Professor in the Department of Electronic and Electrical Engineering at the Southern University of Science and Technology (SUSTech) and Director of the Shenzhen Key Laboratory of Robot Vision and Navigation (https://rcvlab.eee.sustech.edu.cn). Before joining SUSTech, he worked for many years in the Department of Computing Science at the University of Alberta, Canada, where he was a tenured professor, completed a number of major research and development projects, and held an NSERC Industrial Research Chair (IRC). His current research interests include mobile robot navigation, autonomous driving, computer vision, and image processing. He has served on the editorial boards of several international journals and as chair of international conferences; he was Editor-in-Chief of the Conference Editorial Board of IROS, a flagship conference of the IEEE Robotics and Automation Society (2020-2022), and is currently a member of the Administrative Committee of the IEEE Robotics and Automation Society (RAS) (2023-2025).
Education
1982 BSc in Electrical Engineering, Northeastern University, USA
1986 PhD in Electrical Engineering, Purdue University, USA
1987 Postdoctoral Fellow (PDF) in Computer and Information Science, University of Pennsylvania, USA
Professional Experience
2020.10 – present  Chair Professor, Department of Electronic and Electrical Engineering, Southern University of Science and Technology, China
2000.07 – 2020.10  Professor, Department of Computing Science, University of Alberta, Canada
2002.07 – 2003.04  Senior Visiting Research Fellow, Nanyang Technological University, Singapore
1994.07 – 2000.06  Associate Professor, Department of Computing Science, University of Alberta, Canada
1994.07 – 1995.06  Visiting Researcher, Mechanical Engineering Laboratory, Japan
1988.01 – 1994.06  Assistant Professor, Department of Computing Science, University of Alberta, Canada
Research Interests
Mobile robot navigation, visual SLAM, semantic mapping
Embodied AI, robot navigation and manipulation based on large models
Computer vision, object detection, object tracking, image segmentation
Polarization camera applications, HDR imaging, 3D reconstruction
(Google Scholar: https://scholar.google.ca/citations?user=J7UkpAIAAAAJ)
Honors and Awards
2024 Best Paper Award in Robotics, 21st Conference on Robots and Vision (CRV)
2024 Fellow, Asia-Pacific Artificial Intelligence Association (AAIA)
2024 First Prize, Guangdong Electronics Society
2019 Best Paper Award, IEEE ROBIO
2018 Distinguished Service Award, IEEE/RSJ IROS
2015 Fellow, Canadian Academy of Engineering
2014 IEEE Fellow
2003-2017 Industrial Research Chair, Natural Sciences and Engineering Research Council of Canada (NSERC)
2008 Alberta Science and Technology (ASTech) Award
2006 Annual Award, Canadian Image Processing and Pattern Recognition Society
2004 Member of the Year, Chinese Professors Association of Canada
2002 Best Teacher Award, Faculty of Science, University of Alberta
2000 IEEE Third Millennium Medal
Recent Publications (2022 – )
[1] S. Elkerdawy, M. Elhoushi, H. Zhang, and N. Ray, ‘Fire together wire together: A dynamic pruning approach with self-supervised mask prediction’, in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022, pp. 12454–12463.
[2] X. Wang, H. Zhang, and G. Peng, ‘Evaluating and Optimizing Feature Combinations for Visual Loop Closure Detection’, Journal of Intelligent & Robotic Systems, vol. 104, no. 2, p. 31, 2022.
[3] I. Ali and H. Zhang, ‘Are we ready for robust and resilient SLAM? A framework for quantitative characterization of SLAM datasets’, in 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2022, pp. 2810–2816.
[4] M. Shakeri and H. Zhang, ‘Highlight specular reflection separation based on tensor low-rank and sparse decomposition using polarimetric cues’, arXiv preprint arXiv:2207.03543, 2022.
[5] I. Ali and H. Zhang, ‘Optimizing SLAM Evaluation Footprint Through Dynamic Range Coverage Analysis of Datasets’, in 2023 Seventh IEEE International Conference on Robotic Computing (IRC), 2023, pp. 127–134.
[6] S. An et al., ‘Deep tri-training for semi-supervised image segmentation’, IEEE Robotics and Automation Letters, vol. 7, no. 4, pp. 10097–10104, 2022.
[7] B. Yang, J. Li, Z. Shao, and H. Zhang, ‘Robust UWB indoor localization for NLOS scenes via learning spatial-temporal features’, IEEE Sensors Journal, vol. 22, no. 8, pp. 7990–8000, 2022.
[8] G. Chen, L. He, Y. Guan, and H. Zhang, ‘Perspective phase angle model for polarimetric 3D reconstruction’, in European Conference on Computer Vision, 2022, pp. 398–414.
[9] H. Ye, J. Zhao, Y. Pan, W. Chen, and H. Zhang, ‘Following Closely: A Robust Monocular Person Following System for Mobile Robot’, arXiv preprint arXiv:2204.10540, 2022.
[10] R. Zhou, L. He, H. Zhang, X. Lin, and Y. Guan, ‘NDD: A 3D point cloud descriptor based on normal distribution for loop closure detection’, in 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2022, pp. 1328–1335.
[11] H. Ye, J. Zhao, Y. Pan, W. Chen, L. He, and H. Zhang, ‘Robot person following under partial occlusion’, in 2023 IEEE International Conference on Robotics and Automation (ICRA), 2023, pp. 7591–7597.
[12] Y. Pan, L. He, Y. Guan, and H. Zhang, ‘An Experimental Study of Keypoint Descriptor Fusion’, in 2022 IEEE International Conference on Robotics and Biomimetics (ROBIO), 2022, pp. 699–704.
[13] B. Liu, Y. Fu, F. Lu, J. Cui, Y. Wu, and H. Zhang, ‘NPR: Nocturnal Place Recognition in Streets’, arXiv preprint arXiv:2304.00276, 2023.
[14] H. Ye, W. Chen, J. Yu, L. He, Y. Guan, and H. Zhang, ‘Condition-invariant and compact visual place description by convolutional autoencoder’, Robotica, vol. 41, no. 6, pp. 1718–1732, 2023.
[15] C. Tang, D. Huang, L. Meng, W. Liu, and H. Zhang, ‘Task-oriented grasp prediction with visual-language inputs’, in 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2023, pp. 4881–4888.
[16] C. Tang, D. Huang, W. Ge, W. Liu, and H. Zhang, ‘GraspGPT: Leveraging semantic knowledge from a large language model for task-oriented grasping’, IEEE Robotics and Automation Letters, 2023.
[17] C. Tang, J. Yu, W. Chen, B. Xia, and H. Zhang, ‘Relationship oriented semantic scene understanding for daily manipulation tasks’, in 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2022, pp. 9926–9933.
[18] B. Yang, J. Li, Z. Shao, and H. Zhang, ‘Self-supervised deep location and ranging error correction for UWB localization’, IEEE Sensors Journal, vol. 23, no. 9, pp. 9549–9559, 2023.
[19] J. Ruan, L. He, Y. Guan, and H. Zhang, ‘Combining scene coordinate regression and absolute pose regression for visual relocalization’, in 2023 IEEE International Conference on Robotics and Automation (ICRA), 2023, pp. 11749–11755.
[20] X. Liu, S. Wen, and H. Zhang, ‘A real-time stereo visual-inertial SLAM system based on point-and-line features’, IEEE Transactions on Vehicular Technology, vol. 72, no. 5, pp. 5747–5758, 2023.
[21] K. Cai, W. Chen, C. Wang, H. Zhang, and M. Q.-H. Meng, ‘Curiosity-based robot navigation under uncertainty in crowded environments’, IEEE Robotics and Automation Letters, vol. 8, no. 2, pp. 800–807, 2022.
[22] X. Lin, J. Ruan, Y. Yang, L. He, Y. Guan, and H. Zhang, ‘Robust data association against detection deficiency for semantic SLAM’, IEEE Transactions on Automation Science and Engineering, vol. 21, no. 1, pp. 868–880, 2023.
[23] W. Chen et al., ‘Keyframe Selection with Information Occupancy Grid Model for Long-term Data Association’, in 2022 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2022, pp. 2786–2793.
[24] W. Yang, Y. Zhuang, D. Luo, W. Wang, and H. Zhang, ‘VI-HSO: Hybrid Sparse Monocular Visual-Inertial Odometry’, IEEE Robotics and Automation Letters, 2023.
[25] L. He and H. Zhang, ‘Large-scale graph sinkhorn distance approximation for resource-constrained devices’, IEEE Transactions on Consumer Electronics, 2023.
[26] W. Chen et al., ‘Cloud Learning-based Meets Edge Model-based: Robots Don’t Need to Build All the Submaps Itself’, IEEE Transactions on Vehicular Technology, 2023.
[27] W. Chen, C. Fu, and H. Zhang, ‘Rumination meets VSLAM: You don’t need to build all the submaps in realtime’, Authorea Preprints, 2023.
[28] Z. Tang, H. Ye, and H. Zhang, ‘Multi-Scale Point Octree Encoding Network for Point Cloud Based Place Recognition’, in 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2023, pp. 9191–9197.
[29] L. He, W. Li, Y. Guan, and H. Zhang, ‘IGICP: Intensity and geometry enhanced LiDAR odometry’, IEEE Transactions on Intelligent Vehicles, 2023.
[30] L. He and H. Zhang, ‘Doubly stochastic distance clustering’, IEEE Transactions on Circuits and Systems for Video Technology, vol. 33, no. 11, pp. 6721–6732, 2023.
[31] S. Wen, P. Li, and H. Zhang, ‘Hybrid Cross-Transformer-KPConv for Point Cloud Segmentation’, IEEE Signal Processing Letters, 2023.
[32] J. Li et al., ‘Deep learning based defect detection algorithm for solar panels’, in 2023 WRC Symposium on Advanced Robotics and Automation (WRC SARA), 2023, pp. 438–443.
[33] B. Liu et al., ‘NocPlace: Nocturnal Visual Place Recognition Using Generative and Inherited Knowledge Transfer’, arXiv preprint arXiv:2402.17159, 2024.
[34] S. Wen, X. Liu, Z. Wang, H. Zhang, Z. Zhang, and W. Tian, ‘An improved multi-object classification algorithm for visual SLAM under dynamic environment’, Intelligent Service Robotics, vol. 15, no. 1, pp. 39–55, 2022.
[35] W. Ge, C. Tang, and H. Zhang, ‘Commonsense Scene Graph-based Target Localization for Object Search’, arXiv preprint arXiv:2404.00343, 2024.
[36] J. Zhao, H. Ye, Y. Zhan, and H. Zhang, ‘Human Orientation Estimation under Partial Observation’, arXiv preprint arXiv:2404.14139, 2024.
[37] H. Tao, B. Liu, J. Cui, and H. Zhang, ‘A convolutional-transformer network for crack segmentation with boundary awareness’, in 2023 IEEE International Conference on Image Processing (ICIP), 2023, pp. 86–90.
[38] J. Yin, Y. Zhuang, F. Yan, Y.-J. Liu, and H. Zhang, ‘A Tightly-Coupled and Keyframe-Based Visual-Inertial-Lidar Odometry System for UGVs With Adaptive Sensor Reliability Evaluation’, IEEE Transactions on Systems, Man, and Cybernetics: Systems, 2024.
[39] I. Ali, B. Wan, and H. Zhang, ‘Prediction of SLAM ATE using an ensemble learning regression model and 1-D global pooling of data characterization’, arXiv preprint arXiv:2303.00616, 2023.
[40] S. An et al., ‘An Open-Source Robotic Chinese Chess Player’, in 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2023, pp. 6238–6245.
[41] X. Liu, S. Wen, J. Zhao, T. Z. Qiu, and H. Zhang, ‘Edge-Assisted Multi-Robot Visual-Inertial SLAM With Efficient Communication’, IEEE Transactions on Automation Science and Engineering, 2024.
[42] S. Ji et al., ‘A Point-to-distribution Degeneracy Detection Factor for LiDAR SLAM using Local Geometric Models’, in 2024 IEEE International Conference on Robotics and Automation (ICRA), 2024, pp. 12283–12289.
[43] D. Huang, C. Tang, and H. Zhang, ‘Efficient Object Rearrangement via Multi-view Fusion’, in 2024 IEEE International Conference on Robotics and Automation (ICRA), 2024, pp. 18193–18199.
[44] H. Ye, J. Zhao, Y. Zhan, W. Chen, L. He, and H. Zhang, ‘Person re-identification for robot person following with online continual learning’, IEEE Robotics and Automation Letters, 2024.
[45] W. Chen et al., ‘Cloud-edge Collaborative Submap-based VSLAM using Implicit Representation Transmission’, IEEE Transactions on Vehicular Technology, 2024.
[46] G. Zeng, B. Zeng, Q. Wei, H. Hu, and H. Zhang, ‘Visual Object Tracking with Mutual Affinity Aligned to Human Intuition’, IEEE Transactions on Multimedia, 2024.