Feng Chaolu

Personal profile

Feng Chaolu, male, is an Associate Professor and doctoral supervisor in the Department of Artificial Intelligence, Northeastern University, and a Shenyang Leading Talent. He is a member of the Intelligent Healthcare Committee of the Chinese Association for Artificial Intelligence, a member of the Biomedical Imaging Committee of the China Society of Image and Graphics, vice chairman of the Intelligent Imaging and Cytology Research Committee and of the Female Pelvic Floor Disease, Reproductive Reconstruction and Digital Technology Committee of the Liaoning Society for Cell Biology, a member of the organizing committee of the international conference ICBEB, Biomedical...

Person Re-identification Based on Meta Learning

    GCReID: Generalized continual person re-identification via meta learning and knowledge accumulation


    Background

    Person re-identification (ReID) has made good progress in stationary domains. However, as new scenarios (domains) emerge unexpectedly, the ReID model must be retrained to adapt to them, which leads to catastrophic forgetting. Continual learning trains the model in the order in which domains emerge to alleviate catastrophic forgetting.
    However, continual person ReID performs poorly when it is directly applied to unseen scenarios, as shown in Fig. 1. That is to say, its generalization ability is limited by the distributional differences between the domains that participate in training and those that do not. Domain adaptation and domain generalization can help continual person ReID improve its performance on unseen domains. As shown in Fig. 2(a), standard domain adaptation requires a subset of the unseen domains to participate in training to guarantee better adaptation to those domains. Domain generalization requires multiple domains to participate in training to enhance generalization; that is to say, as shown in Fig. 2(b), all source domains have to be accessible in advance at the same time.

    [Fig. 1: continual person ReID tested on unseen domains; Fig. 2: (a) domain adaptation, (b) domain generalization, (c) continual domain generalization]


    Motivation

    In most cases, scenarios change in order, i.e., source domains arrive one after another as shown in Fig. 2(c), rather than being accessible simultaneously. Continual domain generalization, shown in Fig. 2(c), therefore has to address not only catastrophic forgetting under the continual learning paradigm (domains arrive sequentially), but also generalizability on unseen domains. Hence, similar to the parameter regularization used to resist forgetting in representative continual learning methods, meta-learning-based parameter regularization is introduced into this field to improve the generalization ability of the model.
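
    The following is a minimal sketch, not the authors' implementation, of meta-learning-based parameter regularization with a meta-train/meta-test split in PyTorch. It assumes a ReID backbone "model" that outputs identity logits, an existing optimizer, and that a batch from a simulated unseen domain is already available; all names and hyper-parameters are illustrative.

    import torch
    import torch.nn.functional as F
    from torch.func import functional_call

    def meta_update(model, optimizer, seen_batch, simulated_batch, inner_lr=0.01):
        # One outer step: adapt on the current (seen) domain, then check the
        # adapted parameters on a simulated unseen domain so that the final
        # update also rewards generalization (illustrative sketch only).
        x_tr, y_tr = seen_batch          # meta-train: current domain
        x_te, y_te = simulated_batch     # meta-test: simulated unseen domain

        # Inner step on the meta-train batch; keep the graph for the outer step.
        train_loss = F.cross_entropy(model(x_tr), y_tr)
        params = dict(model.named_parameters())
        grads = torch.autograd.grad(train_loss, list(params.values()), create_graph=True)
        adapted = {name: p - inner_lr * g
                   for (name, p), g in zip(params.items(), grads)}

        # Meta-test loss with the adapted parameters acts as a regularizer that
        # rewards updates which also work on the (simulated) unseen domain.
        test_loss = F.cross_entropy(functional_call(model, adapted, (x_te,)), y_te)

        optimizer.zero_grad()
        (train_loss + test_loss).backward()
        optimizer.step()
        return train_loss.item(), test_loss.item()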

    Our Contributions


  • In this paper, we enhance the generalization ability of continual person ReID from the aspects of sample diversity and distribution difference. Our main contributions are summarized as follows.

    • We simulate unseen domains according to prior knowledge to enhance sample diversity.

    • A fully connected graph is proposed to store accumulated knowledge learned from all seen domains and the simulated domains.

    • Meta-train is used to extract new knowledge from the current domain.

    • Meta-test is used to extract potential knowledge from unseen domains, which are simulated according to prior knowledge.

    • The above knowledge is gathered to update the accumulated knowledge via a graph attention network (a toy sketch follows this list).

    • We evaluate the proposed method and compare it with 7 representative methods on 12 benchmark datasets.
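
    As a toy illustration of knowledge accumulation on a fully connected graph, the sketch below applies a single graph-attention layer over node features that stand for knowledge vectors (for example, prototypes) from previously seen domains, the current domain, and the simulated domains. This is an assumption-laden simplification in PyTorch, not the paper's exact network; dimensions and node counts are made up for the example.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class FullyConnectedGAT(nn.Module):
        # Single-head graph attention over a fully connected graph of N nodes.
        def __init__(self, dim):
            super().__init__()
            self.proj = nn.Linear(dim, dim, bias=False)
            self.attn = nn.Linear(2 * dim, 1, bias=False)

        def forward(self, nodes):                      # nodes: (N, dim)
            h = self.proj(nodes)
            n = h.size(0)
            # Pairwise attention logits e_ij = a([h_i || h_j]) for all node pairs.
            hi = h.unsqueeze(1).expand(n, n, -1)
            hj = h.unsqueeze(0).expand(n, n, -1)
            e = F.leaky_relu(self.attn(torch.cat([hi, hj], dim=-1))).squeeze(-1)
            alpha = F.softmax(e, dim=-1)               # attention over all nodes
            return alpha @ h                           # updated knowledge vectors

    # Usage: fuse accumulated, new (meta-train) and potential (meta-test) knowledge.
    dim = 128
    accumulated = torch.randn(5, dim)   # knowledge from previously seen domains
    current = torch.randn(1, dim)       # knowledge from the current domain
    potential = torch.randn(2, dim)     # knowledge from simulated unseen domains
    updated = FullyConnectedGAT(dim)(torch.cat([accumulated, current, potential], dim=0))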


    Experiments

    We conduct experiments on 12 person ReID benchmarks, including Market, CUHKSYSU, DukeMTMC-reID, MSMT17, CUHK03, Grid, SenseReID, CUHK01, CUHK02, VIPER, and iLIDS. Mean average precision (mAP) and Rank-1 accuracy are used to evaluate performance on these datasets. Datasets are downloaded from Torchreid_Dataset_Doc and DualNorm. We compare the proposed model GCReID with 7 methods, namely Fine-Tuning (FT), Learning without Forgetting (LwF), continual representation learning (CRL), adaptive knowledge accumulation (AKA), continual knowledge preserving (CKP), generalizing without forgetting (GwF), and memory-based multi-source meta-learning (M3L).
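
    For reference, the sketch below shows how mAP and Rank-1 can be computed from a query-gallery distance matrix. It is an illustrative simplification rather than the repository's evaluation code; the camera-ID filtering used in standard ReID protocols is omitted, and the input arrays are assumed to be given.

    import numpy as np

    def evaluate(distmat, q_pids, g_pids):
        # distmat: (num_query, num_gallery) distances; *_pids: identity labels.
        indices = np.argsort(distmat, axis=1)                 # gallery ranked per query
        matches = (g_pids[indices] == q_pids[:, None]).astype(np.int32)

        aps, rank1_hits = [], 0
        for row in matches:
            if not row.any():                                 # query has no true match
                continue
            rank1_hits += int(row[0])                         # correct match at rank 1?
            hits = np.cumsum(row)
            ranks_of_hits = np.where(row == 1)[0] + 1
            aps.append(np.mean(hits[row == 1] / ranks_of_hits))  # average precision

        return float(np.mean(aps)), rank1_hits / len(aps)     # mAP, Rank-1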



    [Figures: domain training orders and experimental results (mAP / Rank-1)]

    Sources  
    • For more details, please see our paper.

    • The code is available at GCReID.

    • Citation: Anyone who uses this code is considered to agree to cite the following reference:

      @article{liu2024gcreid,
        title={GCReID: Generalized continual person re-identification via meta learning and knowledge accumulation},
        author={Liu, Zhaoshuo and Feng, Chaolu and Yu, Kun and Hu, Jun and Yang, Jinzhu},
        journal={Neural Networks},
        volume={179},
        pages={106561},
        year={2024},
        publisher={Elsevier}
      }


