陈春林 (Chen Chunlin)
Profile
Curriculum Vitae
Education and Work Experience
He received a B.Eng. from the Department of Automation and a second bachelor's degree in management from the Department of Management Science at the University of Science and Technology of China in 2001, and a Ph.D. in engineering from the Department of Automation of the same university in 2006. He joined the faculty of the School of Management and Engineering, Nanjing University, in July 2006, was a visiting scholar at Princeton University from 2012 to 2013, and has made several short-term research visits to the University of New South Wales (Australia), City University of Hong Kong, and other institutions. He is currently a professor and doctoral supervisor at the School of Management and Engineering, Nanjing University, where he serves as head of the Department of Control and Systems Engineering and director of the Center for Machine Learning and Intelligent Decision-Making. He has published more than 60 papers in leading international and domestic journals, including the IEEE Transactions series and Science China, and has filed more than 20 national invention patent applications. He was selected for Nanjing University's Young Backbone Teacher Training Program in 2008 and named a young and middle-aged science and technology leader under Jiangsu Province's "333 High-Level Talent Training Project" in 2013.

Teaching and Student Supervision
He has taught undergraduate and graduate courses including Principles of Automatic Control, Introduction to Automation, Intelligent Control and Robotics, Engineering Matrix Theory, and Machine Perception and Intelligent Control. In recent years his undergraduate and graduate students have won more than 20 international and national competition awards. He has supervised 50 graduate students in total, of whom 3 received Nanjing University's Dongliang Special Prize, 9 received national scholarships, and 1 was named a Nanjing University Student of the Year.

Research and Teaching Awards
- 2019 Nanjing University Zheng Gang Fund Outstanding Academic Advisor Demonstration Award
- 2019 Nanjing University "Innovation and Entrepreneurship Star" (Teacher)
- 2018 Nanjing University Outstanding Advisor for Student Social Practice
- 2017 Jiangsu Province Teaching Achievement Award (Higher Education), Second Prize: "Cultivating compound, innovative management talent distinguished by optimized decision-making ability: model and practice"
- 2016 Chinese Institute of Command and Control Science and Technology Progress Award, Second Prize: "Theory and technology for solving command-and-control decision problems by integrating different intelligent characteristics and capabilities"
- 2014 GCL Teaching Award, School of Management and Engineering, Nanjing University
- 2013 Higher Education Outstanding Scientific Research Achievement Award (Science and Technology), Natural Science Award, Second Prize: "Machine learning theory and its applications in the analysis and control of complex systems"
- 2011 Jiangsu Higher Education Society 10th Higher Education Research Achievement Award, Third Prize: "Multi-dimensional teaching reform and practice for the automation undergraduate program"
- 2010 Nanjing University Shilin Group Teaching Award (first cohort)
- 2008 Nanjing University Outstanding Teacher
- 2008 Nanjing University Outstanding Multimedia Courseware, Second Prize

Books
Introduction to Automation (2nd edition, associate editor), Science Press, Beijing, August 2014.

Research Projects
He has led more than 10 projects, including National Natural Science Foundation of China (NSFC) grants and industry-funded projects, and has participated as a key contributor in more than 10 other national key programs, NSFC projects, and industry collaborations.

Honors
- Jiangsu Province "333 Project" (4th phase), third level: young and middle-aged science and technology leader
- Nanjing University 3rd "Innovation and Entrepreneurship Star" (Teacher)
- China Simulation Federation Outstanding Science and Technology Worker (2020)

Patents
He has filed 20 national invention patent applications, 9 of which have been granted.

Research Areas
Machine learning and stochastic optimization, and their applications in the management and control of complex systems, including reinforcement learning, intelligent unmanned systems, quantum control, and quantum artificial intelligence.

Recent Publications
[34] Deep Reinforcement Learning for Multi-contact Motion Planning of Hexapod Robots. The 30th International Joint Conference on Artificial Intelligence (IJCAI), Montreal-themed virtual event, August 21-26, 2021.
[33] Rule-Based Reinforcement Learning for Efficient Robot Navigation with Space Reduction. IEEE/ASME Transactions on Mechatronics, Doi: 10.1109/TMECH.2021.3072675, 2021.
[32] Lifelong Incremental Reinforcement Learning with Online Bayesian Inference. IEEE Transactions on Neural Networks and Learning Systems, Doi: 10.1109/TNNLS.2021.3055499, 2021.
[31] Deep Reinforcement Learning with Quantum-inspired Experience Replay. IEEE Transactions on Cybernetics, Doi: 10.1109/TCYB.2021.3053414, 2021.
[30] Locality-Constrained Discriminative Matrix Regression for Robust Face Identification. IEEE Transactions on Neural Networks and Learning Systems, Doi: 10.1109/TNNLS.2020.3041636, 2020.
[29] Enhanced Group Sparse Regularized Nonconvex Regression for Face Recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, Doi: 10.1109/TPAMI.2020.3033994, 2020.
[28] Pairwise Relations Oriented Discriminative Regression. IEEE Transactions on Circuits and Systems for Video Technology, Doi: 10.1109/TCSVT.2020.3032964, 2020.
[27] Nonnegative representation based discriminant projection for face recognition. International Journal of Machine Learning and Cybernetics, Doi: 10.1007/s13042-020-01199-z, 12: 733-745, 2021.
[26] A multi-timescale framework for state monitoring and lifetime prognosis of lithium-ion batteries. Energy, Doi: 10.1016/j.energy.2021.120684, 229: 120684, 2021.
[25] Realization of a quantum autoencoder for lossless compression of quantum data. Physical Review A, 102: 032412, 2020.
[24] Incremental Reinforcement Learning in Continuous Spaces via Policy Relaxation and Importance Weighting. IEEE Transactions on Neural Networks and Learning Systems, 31(6): 1870-1883, 2020.
[23] Learning-based Quantum Robust Control: Algorithm, Applications and Experiments. IEEE Transactions on Cybernetics, 50(8): 3581-3593, 2020.
[22] Reinforcement Learning Based Optimal Sensor Placement for Spatiotemporal Modeling. IEEE Transactions on Cybernetics, 50(6): 2861-2871, 2020.
[21] Incremental Reinforcement Learning with Prioritized Sweeping for Dynamic Environments. IEEE/ASME Transactions on Mechatronics, 24(2): 621-632, 2019.
[20] Self-paced prioritized curriculum learning with coverage penalty in deep reinforcement learning. IEEE Transactions on Neural Networks and Learning Systems, 29(6): 2216-2226, 2018.
[19] Robust learning control design for quantum unitary transformations. IEEE Transactions on Cybernetics, 47(12): 4405-4417, 2017.
[18] Quantum Learning Control Using Differential Evolution with Equally-mixed Strategies. Control Theory and Technology, 15(3): 226-241, 2017.
[17] Multi-agent Reinforcement Learning with Sparse Interactions by Negotiation and Knowledge Transfer. IEEE Transactions on Cybernetics, 47(5): 1238-1250, 2017.
[16] Quantum Ensemble Classification: A Sampling-based Learning Control Approach. IEEE Transactions on Neural Networks and Learning Systems, 28(6): 1345-1359, 2017.
[15] Learning robust pulses for generating universal quantum gates. Scientific Reports, 6: 36090, 2016.
[14] Robust manipulation of superconducting qubits in the presence of fluctuations. Scientific Reports, 5: 7873, 2015.
[13] Sampling-based learning control for quantum systems with uncertainties. IEEE Transactions on Control Systems Technology, 23(6): 2155-2166, 2015.
[12] Fidelity-based Probabilistic Q-learning for Control of Quantum Systems. IEEE Transactions on Neural Networks and Learning Systems, 25(5): 920-933, 2014.
[11] Sampling-based Learning Control of Inhomogeneous Quantum Ensembles. Physical Review A, 89: 023402, 2014.
[10] Sampling-based Learning Control of Quantum Systems via Path Planning. IET Control Theory and Applications, 8(15): 1513-1522, 2014.
[9] Further results on sampled-data design for robust control of a single qubit. International Journal of Control, 87(10): 2056-2064, 2014.
[8] Control Design of Uncertain Quantum Systems with Fuzzy Estimators. IEEE Transactions on Fuzzy Systems, 20(5): 820-831, 2012.
[7] Robust Quantum-Inspired Reinforcement Learning for Robot Navigation. IEEE/ASME Transactions on Mechatronics, 17(1): 86-97, 2012.
[6] Probabilistic Fuzzy System for Uncertain Localization and Map-Building of Mobile Robots. IEEE Transactions on Instrumentation and Measurement, 61(6): 1546-1560, 2012.
[5] Hybrid MDP Based Integrated Hierarchical Q-learning. Science China Information Sciences, 54(11): 2279-2294, 2011.
[4] Incoherent Control of Quantum Systems with Wavefunction Controllable Subspaces via Quantum Reinforcement Learning. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 38(4): 957-962, 2008.
[3] Quantum Reinforcement Learning. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 38(5): 1207-1220, 2008.
[2] Hybrid Control for Robot Navigation - A Hierarchical Q-Learning Algorithm. IEEE Robotics & Automation Magazine, 15(2): 37-47, 2008.
[1] Quantum Computation for Action Selection Using Reinforcement Learning. International Journal of Quantum Information, 4(6): 1071-1083, 2006.

Academic Service
- Director and Secretary-General, Nanjing University Young Scholars Association
- Vice Chair, Technical Committee on System Simulation, Chinese Association of Automation
- Member, ADPRL Technical Committee, Chinese Association of Automation
- Member, Youth Work Committee, Chinese Association of Automation
- Member, Machine Learning Technical Committee, Chinese Association for Artificial Intelligence
- Member, Youth Work Committee, Chinese Institute of Command and Control
- Director, Jiangsu Association of Automation
- Director, Jiangsu Systems Engineering Society
- Chair, Technical Committee on Quantum Cybernetics, IEEE SMC Society
- Editorial board member of Scientific Reports and other journals