Commit 6790335 by RRFRRF2 · 1 Parent(s): 4f5d27e

feat:add cifar10mini just 1000 nodes
.gitignore CHANGED
@@ -3,5 +3,5 @@ _pycache_
 model-CIFAR10
 
 #cifar10
-cifar-10-batches-py
-cifar-10-python.tar.gz
+**/cifar-10-batches-py/
+**/cifar-10-python.tar.gz
ResNet-CIFAR10/Classification-mini/dataset/index.json ADDED
@@ -0,0 +1,1006 @@
+{
+    "train": [0, 1, 2, …, 997, 998, 999],
+    "test": [],
+    "validation": []
+}
ResNet-CIFAR10/Classification-mini/dataset/info.json ADDED
@@ -0,0 +1,4 @@
+{
+    "model": "ResNet18",
+    "classes": ["airplane", "automobile", "bird", "cat", "deer", "dog", "frog", "horse", "ship", "truck"]
+}
ResNet-CIFAR10/Classification-mini/dataset/labels.npy ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a206deac3a30252ca2263d264fe73e6d244cf744aa7d7648ec1ecb2f40365c83
+size 8128
ResNet-CIFAR10/Classification-mini/epochs/epoch_1/embeddings.npy ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:53a846d7369254a97a6d25b961c270aba6eb1f2412b965abbdac99f947bdef20
+size 2048128
ResNet-CIFAR10/Classification-mini/epochs/epoch_1/model.pth ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:632e71f73fd7e91427ae7751fef67868bff2121b7dfb8eb725920b986f616557
+size 44769410
ResNet-CIFAR10/Classification-mini/epochs/epoch_1/predictions.npy ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:684dd330d3801670400ae7d55ea86025da71abefc71bc24dad3dfb6acff14c1a
+size 40128
ResNet-CIFAR10/Classification-mini/epochs/epoch_2/embeddings.npy ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:546e774f16d8c03b5031660f33252398485e5a7745893bb5d76d692fd002a94e
+size 2048128
ResNet-CIFAR10/Classification-mini/epochs/epoch_2/model.pth ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b35d6b4f69c3222a40639005b29af06c8452a47b526ac429c7e4d6b1a78469db
+size 44769410
ResNet-CIFAR10/Classification-mini/epochs/epoch_2/predictions.npy ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f6d02b608c6460d2e05148c0425a63813fe2e7bb0b5f9e191a1040c9e38748cc
+size 40128
ResNet-CIFAR10/Classification-mini/epochs/epoch_3/embeddings.npy ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2f90aebde34e956f5ebf243c048eff51cf2545e0dc2566821c858f000373fa64
+size 2048128
ResNet-CIFAR10/Classification-mini/epochs/epoch_3/model.pth ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2681f986928109f1e7d52633b206c310155ca5f7692cd0bba22826c59082fc9a
+size 44769410
ResNet-CIFAR10/Classification-mini/epochs/epoch_3/predictions.npy ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c5d782cd395927cc26d438b9512d866d2b25ecd60d3e9357bc20c295dc7db96b
+size 40128
ResNet-CIFAR10/Classification-mini/epochs/layer_info.json ADDED
@@ -0,0 +1 @@
+{"layer_id": "avg_pool", "dim": 512}
ResNet-CIFAR10/Classification-mini/epochs/train.log ADDED
@@ -0,0 +1,6 @@
+2025-06-02 13:08:33,621 - train - INFO - Started training ResNet18
+2025-06-02 13:08:33,622 - train - INFO - Total epochs: 3, learning rate: 0.1, device: cuda:0
+2025-06-02 13:08:47,699 - train - INFO - Epoch: 1 | Train Loss: 1.928 | Train Acc: 30.18% | Test Loss: 1.567 | Test Acc: 41.58%
+2025-06-02 13:09:02,228 - train - INFO - Epoch: 2 | Train Loss: 1.351 | Train Acc: 50.83% | Test Loss: 1.492 | Test Acc: 51.06%
+2025-06-02 13:09:16,149 - train - INFO - Epoch: 3 | Train Loss: 1.055 | Train Acc: 62.12% | Test Loss: 1.285 | Test Acc: 57.01%
+2025-06-02 13:09:16,746 - train - INFO - Training finished!
ResNet-CIFAR10/Classification-mini/readme.md ADDED
@@ -0,0 +1,54 @@
+# ResNet-CIFAR10 Training and Feature Extraction
+
+This project trains a ResNet model on the CIFAR10 dataset and integrates the functionality needed for feature extraction and visualization.
+
+## The time_travel_saver data extractor
+```python
+# Save the files needed to visualize the training process
+if (epoch + 1) % interval == 0 or (epoch == 0):
+    # Create a dedicated sequential dataloader for collecting embeddings
+    ordered_trainloader = torch.utils.data.DataLoader(
+        trainloader.dataset,
+        batch_size=trainloader.batch_size,
+        shuffle=False,
+        num_workers=trainloader.num_workers
+    )
+    epoch_save_dir = os.path.join(save_dir, f'epoch_{epoch+1}')  # save path for this epoch
+    save_model = time_travel_saver(model, ordered_trainloader, device, epoch_save_dir, model_name,
+                                   show=True, layer_name='avg_pool', auto_save_embedding=True)
+    # show: whether to print the model's layer dimension information
+    # layer_name: the layer to extract features from; if None, a layer within the target dimension range is chosen
+    # auto_save_embedding: whether to save the feature vectors automatically (must be True)
+    save_model.save_checkpoint_embeddings_predictions()  # save model weights, features, and predictions to epoch_x
+    if epoch == 0:
+        save_model.save_lables_index(path="../dataset")  # save labels and index to dataset
+```
+
+## Project structure
+
+- `./scripts/train.yaml`: training configuration file (batch size, learning rate, GPU settings, etc.)
+- `./scripts/train.py`: training script; trains the model and collects feature data automatically
+- `./model/`: trained model weights
+- `./epochs/`: high-dimensional feature vectors, predictions, and other data collected during training
+
+## Usage
+
+1. Edit `train.yaml` to set the training parameters
+2. Run the training script:
+```
+python train.py
+```
+3. After training, the data can be found at:
+   - Model weights: `./epochs/epoch_{n}/model.pth`
+   - Feature vectors: `./epochs/epoch_{n}/embeddings.npy`
+   - Predictions: `./epochs/epoch_{n}/predictions.npy`
+   - Labels: `./dataset/labels.npy`
+   - Dataset index: `./dataset/index.json`
+
+## Data formats
+
+- `embeddings.npy`: feature vectors of shape [n_samples, feature_dim]
+- `predictions.npy`: prediction scores of shape [n_samples, n_classes]
+- `labels.npy`: ground-truth labels of shape [n_samples]
+- `index.json`: index information for the train, test, and validation splits
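Given the data formats described in the readme above, a consumer can compute accuracy from `predictions.npy` and `labels.npy` by taking the argmax over the class axis. A minimal sketch, using small synthetic arrays in place of the real `np.load(...)` calls:

```python
import numpy as np

def accuracy_from_predictions(predictions, labels):
    """predictions: [n_samples, n_classes] scores; labels: [n_samples] ints."""
    pred_classes = predictions.argmax(axis=1)
    return float((pred_classes == labels).mean())

# Synthetic stand-ins for np.load("epochs/epoch_3/predictions.npy")
# and np.load("dataset/labels.npy").
predictions = np.array([[0.1, 0.9], [0.8, 0.2], [0.3, 0.7]])
labels = np.array([1, 0, 0])

print(accuracy_from_predictions(predictions, labels))
```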
ResNet-CIFAR10/Classification-mini/scripts/dataset_utils.py ADDED
@@ -0,0 +1,59 @@
+import torch
+import torchvision
+import torchvision.transforms as transforms
+import os
+
+# Dataset loading
+
+def get_cifar10_dataloaders(batch_size=128, num_workers=2, local_dataset_path=None, shuffle=False):
+    """Build the CIFAR10 dataloaders.
+
+    Args:
+        batch_size: batch size
+        num_workers: number of worker processes for data loading
+        local_dataset_path: local dataset path; if given, use the local copy, otherwise download
+
+    Returns:
+        trainloader: training dataloader
+        testloader: test dataloader
+    """
+    # Preprocessing
+    transform_train = transforms.Compose([
+        transforms.RandomCrop(32, padding=4),
+        transforms.RandomHorizontalFlip(),
+        transforms.ToTensor(),
+        transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
+    ])
+
+    transform_test = transforms.Compose([
+        transforms.ToTensor(),
+        transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010)),
+    ])
+
+    # Resolve the dataset path
+    if local_dataset_path:
+        print(f"Using local dataset: {local_dataset_path}")
+        # Download only if the dataset is missing or empty
+        cifar_path = os.path.join(local_dataset_path, 'cifar-10-batches-py')
+        download = not os.path.exists(cifar_path) or not os.listdir(cifar_path)
+        dataset_path = local_dataset_path
+    else:
+        print("No local dataset path given; the dataset will be downloaded")
+        download = True
+        dataset_path = '../dataset'
+
+    # Create the dataset directory
+    if not os.path.exists(dataset_path):
+        os.makedirs(dataset_path)
+
+    trainset = torchvision.datasets.CIFAR10(
+        root=dataset_path, train=True, download=download, transform=transform_train)
+    trainloader = torch.utils.data.DataLoader(
+        trainset, batch_size=batch_size, shuffle=shuffle, num_workers=num_workers)
+
+    testset = torchvision.datasets.CIFAR10(
+        root=dataset_path, train=False, download=download, transform=transform_test)
+    testloader = torch.utils.data.DataLoader(
+        testset, batch_size=batch_size, shuffle=shuffle, num_workers=num_workers)
+
+    return trainloader, testloader
ResNet-CIFAR10/Classification-mini/scripts/get_raw_data.py ADDED
@@ -0,0 +1,82 @@
+# Read the dataset and save the images to ../dataset/raw_data in dataset order as 0.png, 1.png, 2.png, ...
+
+import os
+import numpy as np
+import torchvision
+import torchvision.transforms as transforms
+from PIL import Image
+from tqdm import tqdm
+
+def unpickle(file):
+    """Read a CIFAR-10 data file"""
+    import pickle
+    with open(file, 'rb') as fo:
+        batch_dict = pickle.load(fo, encoding='bytes')
+    return batch_dict
+
+def save_images_from_cifar10(dataset_path, save_dir):
+    """Save images from the CIFAR-10 dataset.
+
+    Args:
+        dataset_path: path to the CIFAR-10 dataset
+        save_dir: directory to save the images to
+    """
+    # Create the output directory
+    os.makedirs(save_dir, exist_ok=True)
+
+    # Collect the training data
+    train_data = []
+    train_labels = []
+
+    # Read the training batches
+    for i in range(1, 6):
+        batch_file = os.path.join(dataset_path, f'data_batch_{i}')
+        if os.path.exists(batch_file):
+            print(f"Reading training batch {i}")
+            batch = unpickle(batch_file)
+            train_data.append(batch[b'data'])
+            train_labels.extend(batch[b'labels'])
+
+    # Stack all training data and convert to HWC image layout
+    if train_data:
+        train_data = np.vstack(train_data)
+        train_data = train_data.reshape(-1, 3, 32, 32).transpose(0, 2, 3, 1)
+
+    # Reading the test data is disabled here; only training images are exported
+    test_file = os.path.join(dataset_path, 'test_batch')
+    # if os.path.exists(test_file):
+    #     print("Reading test data")
+    #     test_batch = unpickle(test_file)
+    #     test_data = test_batch[b'data']
+    #     test_labels = test_batch[b'labels']
+    #     test_data = test_data.reshape(-1, 3, 32, 32).transpose(0, 2, 3, 1)
+    # else:
+    test_data = []
+    test_labels = []
+
+    # Merge the training and test data (test data is empty here)
+    all_data = np.concatenate([train_data, test_data]) if len(test_data) > 0 and len(train_data) > 0 else (train_data if len(train_data) > 0 else test_data)
+    all_labels = train_labels + test_labels if len(test_labels) > 0 and len(train_labels) > 0 else (train_labels if len(train_labels) > 0 else test_labels)
+
+    # Save the images
+    print(f"Saving {len(all_data)} images...")
+    for i, (img, label) in enumerate(tqdm(zip(all_data, all_labels), total=len(all_data))):
+        img = Image.fromarray(img)
+        img.save(os.path.join(save_dir, f"{i}.png"))
+
+    print(f"Done! {len(all_data)} images saved to {save_dir}")
+
+if __name__ == "__main__":
+    # Paths
+    dataset_path = "../dataset/cifar-10-batches-py"
+    save_dir = "../dataset/raw_data"
+
+    # Download the dataset if it is missing
+    if not os.path.exists(dataset_path):
+        print("Dataset not found, downloading...")
+        os.makedirs("../dataset", exist_ok=True)
+        transform = transforms.Compose([transforms.ToTensor()])
+        trainset = torchvision.datasets.CIFAR10(root="../dataset", train=True, download=True, transform=transform)
+
+    # Save the images
+    save_images_from_cifar10(dataset_path, save_dir)
ResNet-CIFAR10/Classification-mini/scripts/get_representation.py ADDED
@@ -0,0 +1,272 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ import torch
2
+ import torch.nn as nn
3
+ import numpy as np
4
+ import os
5
+ import json
6
+ from tqdm import tqdm
7
+
8
+ class time_travel_saver:
9
+ """可视化数据提取器
10
+
11
+ 用于保存模型训练过程中的各种数据,包括:
12
+ 1. 模型权重 (.pth)
13
+ 2. 高维特征 (representation/*.npy)
14
+ 3. 预测结果 (prediction/*.npy)
15
+ 4. 标签数据 (label/labels.npy)
16
+ """
17
+
18
+ def __init__(self, model, dataloader, device, save_dir, model_name,
19
+ auto_save_embedding=False, layer_name=None,show = False):
20
+ """初始化
21
+
22
+ Args:
23
+ model: 要保存的模型实例
24
+ dataloader: 数据加载器(必须是顺序加载的)
25
+ device: 计算设备(cpu or gpu)
26
+ save_dir: 保存根目录
27
+ model_name: 模型名称
28
+ """
29
+ self.model = model
30
+ self.dataloader = dataloader
31
+ self.device = device
32
+ self.save_dir = save_dir
33
+ self.model_name = model_name
34
+ self.auto_save = auto_save_embedding
35
+ self.layer_name = layer_name
36
+
37
+ if show and not layer_name:
38
+ layer_dimensions = self.show_dimensions()
39
+ # print(layer_dimensions)
40
+
41
+ def show_dimensions(self):
42
+ """显示模型中所有层的名称和对应的维度
43
+
44
+ 这个函数会输出模型中所有层的名称和它们的输出维度,
45
+ 帮助用户选择合适的层来提取特征。
46
+
47
+ Returns:
48
+ layer_dimensions: 包含层名称和维度的字典
49
+ """
50
+ activation = {}
51
+ layer_dimensions = {}
52
+
53
+ def get_activation(name):
54
+ def hook(model, input, output):
55
+ activation[name] = output.detach()
56
+ return hook
57
+
58
+ # 注册钩子到所有层
59
+ handles = []
60
+ for name, module in self.model.named_modules():
61
+ if isinstance(module, nn.Module) and not isinstance(module, nn.ModuleList) and not isinstance(module, nn.ModuleDict):
62
+ handles.append(module.register_forward_hook(get_activation(name)))
63
+
64
+ self.model.eval()
65
+ with torch.no_grad():
66
+ # 获取一个batch来分析每层的输出维度
67
+ inputs, _ = next(iter(self.dataloader))
68
+ inputs = inputs.to(self.device)
69
+ _ = self.model(inputs)
70
+
71
+ # 分析所有层的输出维度
72
+ print("\n模型各层的名称和维度:")
73
+ print("-" * 50)
74
+ print(f"{'层名称':<40} {'特征维度':<15} {'输出形状'}")
75
+ print("-" * 50)
76
+
77
+ for name, feat in activation.items():
78
+ if feat is None:
79
+ continue
80
+
81
+ # 获取特征维度(展平后)
82
+ feat_dim = feat.view(feat.size(0), -1).size(1)
83
+ layer_dimensions[name] = feat_dim
84
+ # 打印层信息
85
+ shape_str = str(list(feat.shape))
86
+ print(f"{name:<40} {feat_dim:<15} {shape_str}")
87
+
88
+ print("-" * 50)
89
+ print("注: 特征维度是将输出张量展平后的维度大小")
90
+ print("你可以通过修改time_travel_saver的layer_name参数来选择不同的层")
91
+ print("例如:layer_name='avg_pool'或layer_name='layer4'等")
92
+
93
+ # 移除所有钩子
94
+ for handle in handles:
95
+ handle.remove()
96
+
97
+ return layer_dimensions
98
+
99
+ def _extract_features_and_predictions(self):
100
+ """提取特征和预测结果
101
+
102
+ Returns:
103
+ features: 高维特征 [样本数, 特征维度]
104
+ predictions: 预测结果 [样本数, 类别数]
105
+ """
106
+ features = []
107
+ predictions = []
108
+ indices = []
109
+ activation = {}
110
+
111
+ def get_activation(name):
112
+ def hook(model, input, output):
113
+ # 只在需要时保存激活值,避免内存浪费
114
+ if name not in activation or activation[name] is None:
115
+ activation[name] = output.detach()
116
+ return hook
117
+
118
+ # 根据层的名称或维度来选择层
119
+
120
+ # 注册钩子到所有层
121
+ handles = []
122
+ for name, module in self.model.named_modules():
123
+ if isinstance(module, nn.Module) and not isinstance(module, nn.ModuleList) and not isinstance(module, nn.ModuleDict):
124
+ handles.append(module.register_forward_hook(get_activation(name)))
125
+
126
+ self.model.eval()
127
+ with torch.no_grad():
128
+ # 首先获取一个batch来分析每层的输出维度
129
+ inputs, _ = next(iter(self.dataloader))
130
+ inputs = inputs.to(self.device)
131
+ _ = self.model(inputs)
132
+
133
+ # 如果指定了层名,则直接使用该层
134
+ if self.layer_name is not None:
135
+ if self.layer_name not in activation:
136
+ raise ValueError(f"指定的层 {self.layer_name} 不存在于模型中")
137
+
138
+ feat = activation[self.layer_name]
139
+ if feat is None:
140
+ raise ValueError(f"指定的层 {self.layer_name} 没有输出特征")
141
+
142
+ suitable_layer_name = self.layer_name
143
+ suitable_dim = feat.view(feat.size(0), -1).size(1)
144
+ print(f"使用指定的特征层: {suitable_layer_name}, 特征维度: {suitable_dim}")
145
+ else:
146
+ # 找到维度在指定范围内的层
147
+ target_dim_range = (256, 2048)
148
+ suitable_layer_name = None
149
+ suitable_dim = None
150
+
151
+ # 分析所有层的输出维度
152
+ for name, feat in activation.items():
153
+ if feat is None:
154
+ continue
155
+ feat_dim = feat.view(feat.size(0), -1).size(1)
156
+ if target_dim_range[0] <= feat_dim <= target_dim_range[1]:
157
+ suitable_layer_name = name
158
+ suitable_dim = feat_dim
159
+ break
160
+
161
+ if suitable_layer_name is None:
162
+ raise ValueError("没有找到合适维度的特征层")
163
+
164
+ print(f"自动选择的特征层: {suitable_layer_name}, 特征维度: {suitable_dim}")
165
+
166
+ # 保存层信息
167
+ layer_info = {
168
+ 'layer_id': suitable_layer_name,
169
+ 'dim': suitable_dim
170
+ }
171
+ layer_info_path = os.path.join(os.path.dirname(self.save_dir), 'layer_info.json')
172
+ with open(layer_info_path, 'w') as f:
173
+ json.dump(layer_info, f)
174
+
175
+ # 清除第一次运行的激活值
176
+ activation.clear()
177
+
178
+ # 现在处理所有数据
179
+ for batch_idx, (inputs, _) in enumerate(tqdm(self.dataloader, desc="提取特征和预测结果")):
180
+ inputs = inputs.to(self.device)
181
+ outputs = self.model(inputs) # 获取预测结果
182
+
183
+ # 获取并处理特征
184
+ feat = activation[suitable_layer_name]
185
+ flat_features = torch.flatten(feat, start_dim=1)
186
+ features.append(flat_features.cpu().numpy())
187
+ predictions.append(outputs.cpu().numpy())
188
+
189
+ # 清除本次的激活值
190
+ activation.clear()
191
+
192
+ # 移除所有钩子
193
+ for handle in handles:
194
+ handle.remove()
195
+
196
+ if len(features) > 0:
197
+ features = np.vstack(features)
198
+ predictions = np.vstack(predictions)
199
+ return features, predictions
200
+ else:
201
+ return np.array([]), np.array([])
202
+
203
+ def save_lables_index(self, path):
204
+ """保存标签数据和索引信息
205
+
206
+        Args:
+            path: directory to write the label and index files to
+        """
+        os.makedirs(path, exist_ok=True)
+        labels_path = os.path.join(path, 'labels.npy')
+        index_path = os.path.join(path, 'index.json')
+
+        # Try to obtain the labels from the dataset's attributes
+        try:
+            if hasattr(self.dataloader.dataset, 'targets'):
+                # CIFAR10/CIFAR100 expose labels via the `targets` attribute
+                labels = np.array(self.dataloader.dataset.targets)
+            elif hasattr(self.dataloader.dataset, 'labels'):
+                # Some datasets use a `labels` attribute instead
+                labels = np.array(self.dataloader.dataset.labels)
+            else:
+                # Fall back to collecting labels by iterating over the dataloader
+                labels = []
+                for _, batch_labels in self.dataloader:
+                    labels.append(batch_labels.numpy())
+                labels = np.concatenate(labels)
+
+            # Save the label array
+            np.save(labels_path, labels)
+            print(f"Labels saved to {labels_path}")
+
+            # Build the dataset index
+            num_samples = len(labels)
+            indices = list(range(num_samples))
+
+            # Index dictionary: every sample starts in the training split
+            index_dict = {
+                "train": indices,
+                "test": [],        # empty initially
+                "validation": []   # empty initially
+            }
+
+            # Write the index to a JSON file
+            with open(index_path, 'w') as f:
+                json.dump(index_dict, f, indent=4)
+
+            print(f"Dataset index saved to {index_path}")
+
+        except Exception as e:
+            print(f"Failed to save labels and index: {e}")
+
+    def save_checkpoint_embeddings_predictions(self, model=None):
+        """Save model weights and, if enabled, embeddings and predictions."""
+        if model is not None:
+            self.model = model
+        # Save model weights
+        os.makedirs(self.save_dir, exist_ok=True)
+        model_path = os.path.join(self.save_dir, 'model.pth')
+        torch.save(self.model.state_dict(), model_path)
+
+        if self.auto_save:
+            # Extract and save features and predictions
+            features, predictions = self._extract_features_and_predictions()
+
+            # Save features
+            np.save(os.path.join(self.save_dir, 'embeddings.npy'), features)
+            # Save predictions
+            np.save(os.path.join(self.save_dir, 'predictions.npy'), predictions)
+            print("\nSaved the following data:")
+            print(f"- model weights: {model_path}")
+            print(f"- embeddings: [num samples: {features.shape[0]}, feature dim: {features.shape[1]}]")
+            print(f"- predictions: [num samples: {predictions.shape[0]}, num classes: {predictions.shape[1]}]")
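The `index.json` layout written by `save_lables_index` (all samples assigned to `train`, with empty `test` and `validation` splits) can be reproduced and round-tripped with the standard library alone. A minimal sketch, assuming the 1000-sample mini dataset; the temporary directory stands in for the real `../dataset` path:

```python
import json
import os
import tempfile

# 1000 samples, matching the mini dataset; hypothetical count for illustration.
num_samples = 1000

# Same index layout the saver writes: every sample starts in the train split.
index_dict = {
    "train": list(range(num_samples)),
    "test": [],
    "validation": [],
}

path = tempfile.mkdtemp()
index_path = os.path.join(path, "index.json")
with open(index_path, "w") as f:
    json.dump(index_dict, f, indent=4)

# A downstream consumer can reload the split assignment later.
with open(index_path) as f:
    loaded = json.load(f)

print(len(loaded["train"]))  # 1000
print(loaded["test"])        # []
```

A consumer that wants a held-out split would move indices from `"train"` into `"test"` or `"validation"` and rewrite the file.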
ResNet-CIFAR10/Classification-mini/scripts/model.py ADDED
@@ -0,0 +1,308 @@
+'''
+ResNet in PyTorch.
+
+ResNet (Deep Residual Network) is the deep architecture proposed by Kaiming He
+et al. at Microsoft Research. Its key innovation is residual learning: skip
+connections that mitigate the degradation problem of very deep networks.
+
+Key features:
+1. Residual blocks (Residual Block) with skip connections
+2. Batch Normalization after every convolution
+3. Scales to much deeper networks (up to 152 layers)
+4. Breakthrough results on many computer vision tasks
+
+Reference:
+[1] Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun
+    Deep Residual Learning for Image Recognition. arXiv:1512.03385
+'''
+import torch
+import torch.nn as nn
+
+class BasicBlock(nn.Module):
+    """Basic residual block
+
+    Used in the shallower ResNet-18/34 networks. Structure:
+        x -> Conv -> BN -> ReLU -> Conv -> BN -> (+) -> ReLU
+        |------------------------------------------|
+
+    Args:
+        in_channels: number of input channels
+        out_channels: number of output channels
+        stride: stride, used for downsampling; defaults to 1
+
+    Note: the basic block performs no channel compression, so expansion = 1.
+    """
+    expansion = 1
+
+    def __init__(self, in_channels, out_channels, stride=1):
+        super(BasicBlock, self).__init__()
+        self.features = nn.Sequential(
+            nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=stride, padding=1, bias=False),
+            nn.BatchNorm2d(out_channels),
+            nn.ReLU(True),
+            nn.Conv2d(out_channels, out_channels, kernel_size=3, stride=1, padding=1, bias=False),
+            nn.BatchNorm2d(out_channels)
+        )
+
+        # If input and output shapes differ, use a 1x1 convolution to match them
+        self.shortcut = nn.Sequential()
+        if stride != 1 or in_channels != self.expansion * out_channels:
+            self.shortcut = nn.Sequential(
+                nn.Conv2d(in_channels, self.expansion * out_channels, kernel_size=1, stride=stride, bias=False),
+                nn.BatchNorm2d(self.expansion * out_channels),
+            )
+
+    def forward(self, x):
+        out = self.features(x)
+        out += self.shortcut(x)
+        out = torch.relu(out)
+        return out
+
+
+class Bottleneck(nn.Module):
+    """Bottleneck residual block
+
+    Used in the deeper ResNet-50/101/152 networks. Structure:
+        x -> 1x1Conv -> BN -> ReLU -> 3x3Conv -> BN -> ReLU -> 1x1Conv -> BN -> (+) -> ReLU
+        |-------------------------------------------------------------------|
+
+    Args:
+        in_channels: number of input channels
+        zip_channels: number of channels after compression
+        stride: stride, used for downsampling; defaults to 1
+
+    Note: a 1x1 convolution first compresses the channels, then restores them; expansion = 4.
+    """
+    expansion = 4
+
+    def __init__(self, in_channels, zip_channels, stride=1):
+        super(Bottleneck, self).__init__()
+        out_channels = self.expansion * zip_channels
+        self.features = nn.Sequential(
+            # 1x1 convolution to compress channels
+            nn.Conv2d(in_channels, zip_channels, kernel_size=1, bias=False),
+            nn.BatchNorm2d(zip_channels),
+            nn.ReLU(inplace=True),
+            # 3x3 convolution to extract features
+            nn.Conv2d(zip_channels, zip_channels, kernel_size=3, stride=stride, padding=1, bias=False),
+            nn.BatchNorm2d(zip_channels),
+            nn.ReLU(inplace=True),
+            # 1x1 convolution to restore channels
+            nn.Conv2d(zip_channels, out_channels, kernel_size=1, bias=False),
+            nn.BatchNorm2d(out_channels)
+        )
+
+        self.shortcut = nn.Sequential()
+        if stride != 1 or in_channels != out_channels:
+            self.shortcut = nn.Sequential(
+                nn.Conv2d(in_channels, out_channels, kernel_size=1, stride=stride, bias=False),
+                nn.BatchNorm2d(out_channels)
+            )
+
+    def forward(self, x):
+        out = self.features(x)
+        out += self.shortcut(x)
+        out = torch.relu(out)
+        return out
+
+class ResNet(nn.Module):
+    """ResNet model
+
+    Network structure:
+    1. One convolutional layer for initial feature extraction
+    2. Four residual layers, each containing several residual blocks
+    3. Average pooling and a fully connected layer for classification
+
+    For CIFAR10, the feature map sizes evolve as:
+    (32,32,3) -> [Conv] -> (32,32,64) -> [Layer1] -> (32,32,64) -> [Layer2]
+    -> (16,16,128) -> [Layer3] -> (8,8,256) -> [Layer4] -> (4,4,512) -> [AvgPool]
+    -> (1,1,512) -> [FC] -> (num_classes)
+
+    Args:
+        block: residual block type (BasicBlock or Bottleneck)
+        num_blocks: list with the number of residual blocks per layer
+        num_classes: number of classes, defaults to 10
+        verbose: whether to print intermediate feature map sizes
+        init_weights: whether to initialize weights
+        dropout: whether to apply dropout before the fully connected layer
+    """
+    def __init__(self, block, num_blocks, num_classes=10, verbose=False, init_weights=True, dropout=False):
+        super(ResNet, self).__init__()
+        self.verbose = verbose
+        self.in_channels = 64
+
+        # First convolutional layer
+        self.features = nn.Sequential(
+            nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1, bias=False),
+            nn.BatchNorm2d(64),
+            nn.ReLU(inplace=True)
+        )
+
+        # Four residual layers
+        self.layer1 = self._make_layer(block, 64, num_blocks[0], stride=1)
+        self.layer2 = self._make_layer(block, 128, num_blocks[1], stride=2)
+        self.layer3 = self._make_layer(block, 256, num_blocks[2], stride=2)
+        self.layer4 = self._make_layer(block, 512, num_blocks[3], stride=2)
+
+        # Classification head
+        self.avg_pool = nn.AvgPool2d(kernel_size=4)
+        if dropout:
+            self.dropout = nn.Dropout(p=0.5)
+        else:
+            self.dropout = nn.Identity()
+        self.classifier = nn.Linear(512 * block.expansion, num_classes)
+
+        if init_weights:
+            self._initialize_weights()
+
+    def _make_layer(self, block, out_channels, num_blocks, stride):
+        """Build one residual layer
+
+        Args:
+            block: residual block type
+            out_channels: number of output channels
+            num_blocks: number of residual blocks
+            stride: stride of the first block (used for downsampling)
+
+        Returns:
+            nn.Sequential: the residual layer
+        """
+        strides = [stride] + [1] * (num_blocks - 1)
+        layers = []
+        for stride in strides:
+            layers.append(block(self.in_channels, out_channels, stride))
+            self.in_channels = out_channels * block.expansion
+        return nn.Sequential(*layers)
+
+    def forward(self, x):
+        """Forward pass
+
+        Args:
+            x: input tensor, [N,3,32,32]
+
+        Returns:
+            out: output tensor, [N,num_classes]
+        """
+        out = self.features(x)
+        if self.verbose:
+            print('block 1 output: {}'.format(out.shape))
+
+        out = self.layer1(out)
+        if self.verbose:
+            print('block 2 output: {}'.format(out.shape))
+
+        out = self.layer2(out)
+        if self.verbose:
+            print('block 3 output: {}'.format(out.shape))
+
+        out = self.layer3(out)
+        if self.verbose:
+            print('block 4 output: {}'.format(out.shape))
+
+        out = self.layer4(out)
+        if self.verbose:
+            print('block 5 output: {}'.format(out.shape))
+
+        out = self.avg_pool(out)
+        out = out.view(out.size(0), -1)
+        out = self.dropout(out)
+        out = self.classifier(out)
+        return out
+
+    def feature(self, x):
+        """Forward pass up to the penultimate layer
+
+        Args:
+            x: input tensor, [N,3,32,32]
+
+        Returns:
+            out: feature tensor, [N, 512 * block.expansion]
+        """
+        out = self.features(x)
+        if self.verbose:
+            print('block 1 output: {}'.format(out.shape))
+
+        out = self.layer1(out)
+        if self.verbose:
+            print('block 2 output: {}'.format(out.shape))
+
+        out = self.layer2(out)
+        if self.verbose:
+            print('block 3 output: {}'.format(out.shape))
+
+        out = self.layer3(out)
+        if self.verbose:
+            print('block 4 output: {}'.format(out.shape))
+
+        out = self.layer4(out)
+        if self.verbose:
+            print('block 5 output: {}'.format(out.shape))
+
+        out = self.avg_pool(out)
+        out = out.view(out.size(0), -1)
+        return out
+
+    def prediction(self, x):
+        """Map a feature vector to class logits."""
+        out = self.classifier(x)
+        return out
+
+    def _initialize_weights(self):
+        """Initialize model weights
+
+        Uses Kaiming initialization:
+        - convolution weights: kaiming_normal_
+        - BatchNorm parameters: constant initialization
+        - linear layers: normal distribution
+        """
+        for m in self.modules():
+            if isinstance(m, nn.Conv2d):
+                nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu')
+                if m.bias is not None:
+                    nn.init.constant_(m.bias, 0)
+            elif isinstance(m, nn.BatchNorm2d):
+                nn.init.constant_(m.weight, 1)
+                nn.init.constant_(m.bias, 0)
+            elif isinstance(m, nn.Linear):
+                nn.init.normal_(m.weight, 0, 0.01)
+                nn.init.constant_(m.bias, 0)
+
+def ResNet18(verbose=False, num_classes=10, dropout=False):
+    """ResNet-18 model
+
+    Args:
+        verbose: whether to print intermediate feature map sizes
+        num_classes: number of classes
+        dropout: whether to apply dropout before the fully connected layer
+    """
+    return ResNet(BasicBlock, [2, 2, 2, 2], num_classes=num_classes, verbose=verbose, dropout=dropout)
+
+def ResNet34(verbose=False, num_classes=10, dropout=False):
+    """ResNet-34 model"""
+    return ResNet(BasicBlock, [3, 4, 6, 3], num_classes=num_classes, verbose=verbose, dropout=dropout)
+
+def ResNet50(verbose=False):
+    """ResNet-50 model"""
+    return ResNet(Bottleneck, [3, 4, 6, 3], verbose=verbose)
+
+def ResNet101(verbose=False):
+    """ResNet-101 model"""
+    return ResNet(Bottleneck, [3, 4, 23, 3], verbose=verbose)
+
+def ResNet152(verbose=False):
+    """ResNet-152 model"""
+    return ResNet(Bottleneck, [3, 8, 36, 3], verbose=verbose)
+
+def test():
+    """Smoke test: run a random batch through ResNet-34 and print shapes."""
+    net = ResNet34()
+    x = torch.randn(2, 3, 32, 32)
+    y = net(x)
+    print('Output shape:', y.size())
+
+    # Print the model summary
+    from torchinfo import summary
+    device = 'cuda' if torch.cuda.is_available() else 'cpu'
+    net = net.to(device)
+    summary(net, (2, 3, 32, 32))
+
+if __name__ == '__main__':
+    test()
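The shape progression in the `ResNet` docstring (32 -> 32 -> 16 -> 8 -> 4 for CIFAR-10) can be sanity-checked with plain stride arithmetic, since every convolution in the network is 3x3 with padding 1. A quick sketch, independent of PyTorch:

```python
def out_size(s, stride):
    # Output spatial size of a 3x3 convolution with padding=1:
    # floor((s - kernel + 2*padding) / stride) + 1
    return (s - 3 + 2 * 1) // stride + 1

s = 32                  # CIFAR-10 input is 32x32
s = out_size(s, 1)      # stem conv         -> 32
s = out_size(s, 1)      # layer1 (stride 1) -> 32
s = out_size(s, 2)      # layer2 (stride 2) -> 16
s = out_size(s, 2)      # layer3 (stride 2) -> 8
s = out_size(s, 2)      # layer4 (stride 2) -> 4
print(s)  # 4
```

The final 4x4 map is exactly what the `nn.AvgPool2d(kernel_size=4)` head expects, which is why this model only works on 32x32 inputs as written.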
ResNet-CIFAR10/Classification-mini/scripts/train.py ADDED
@@ -0,0 +1,225 @@
+import sys
+import os
+import yaml
+from pathlib import Path
+import torch
+import torch.nn as nn
+import torch.optim as optim
+import time
+import logging
+import numpy as np
+from tqdm import tqdm
+
+
+from dataset_utils import get_cifar10_dataloaders
+from model import ResNet18
+from get_representation import time_travel_saver
+
+def setup_logger(log_file):
+    """Configure the logger, overwriting the log file if it already exists
+
+    Args:
+        log_file: path of the log file
+
+    Returns:
+        logger: the configured logger
+    """
+    # Create the logger
+    logger = logging.getLogger('train')
+    logger.setLevel(logging.INFO)
+
+    # Remove any existing handlers
+    if logger.hasHandlers():
+        logger.handlers.clear()
+
+    # File handler; mode 'w' overwrites an existing file
+    fh = logging.FileHandler(log_file, mode='w')
+    fh.setLevel(logging.INFO)
+
+    # Console handler
+    ch = logging.StreamHandler()
+    ch.setLevel(logging.INFO)
+
+    # Formatter
+    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
+    fh.setFormatter(formatter)
+    ch.setFormatter(formatter)
+
+    # Attach the handlers
+    logger.addHandler(fh)
+    logger.addHandler(ch)
+
+    return logger
+
+def train_model(model, trainloader, testloader, epochs=200, lr=0.1, device='cuda:0',
+                save_dir='./epochs', model_name='model', interval=1):
+    """Generic model training loop
+    Args:
+        model: the model to train
+        trainloader: training dataloader
+        testloader: test dataloader
+        epochs: number of training epochs
+        lr: learning rate
+        device: training device in the form 'cuda:N', where N is the GPU index (0,1,2,3)
+        save_dir: directory for model checkpoints
+        model_name: model name
+        interval: checkpoint saving interval, in epochs
+    """
+    # Validate and select the device
+    if not torch.cuda.is_available():
+        print("CUDA is not available; training on CPU")
+        device = 'cpu'
+    elif not device.startswith('cuda:'):
+        device = 'cuda:0'
+
+    # Make sure the requested GPU actually exists
+    if device.startswith('cuda:'):
+        gpu_id = int(device.split(':')[1])
+        if gpu_id >= torch.cuda.device_count():
+            print(f"GPU {gpu_id} is not available; falling back to GPU 0")
+            device = 'cuda:0'
+
+    # Create the checkpoint directory
+    if not os.path.exists(save_dir):
+        os.makedirs(save_dir)
+
+    # Set up the log file path
+    log_file = os.path.join(os.path.dirname(save_dir), 'epochs', 'train.log')
+    if not os.path.exists(os.path.dirname(log_file)):
+        os.makedirs(os.path.dirname(log_file))
+
+    logger = setup_logger(log_file)
+
+    # Loss function and optimizer
+    criterion = nn.CrossEntropyLoss()
+    optimizer = optim.SGD(model.parameters(), lr=lr, momentum=0.9, weight_decay=5e-4)
+    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=50)
+
+    # Move the model to the selected device
+    model = model.to(device)
+    best_acc = 0
+    start_time = time.time()
+
+    logger.info(f'Started training {model_name}')
+    logger.info(f'Total epochs: {epochs}, learning rate: {lr}, device: {device}')
+
+    for epoch in range(epochs):
+        # Training phase
+        model.train()
+        train_loss = 0
+        correct = 0
+        total = 0
+
+        train_pbar = tqdm(trainloader, desc=f'Epoch {epoch+1}/{epochs} [Train]')
+        for batch_idx, (inputs, targets) in enumerate(train_pbar):
+            inputs, targets = inputs.to(device), targets.to(device)
+            optimizer.zero_grad()
+            outputs = model(inputs)
+            loss = criterion(outputs, targets)
+            loss.backward()
+            optimizer.step()
+
+            train_loss += loss.item()
+            _, predicted = outputs.max(1)
+            total += targets.size(0)
+            correct += predicted.eq(targets).sum().item()
+
+            # Update the progress bar
+            train_pbar.set_postfix({
+                'loss': f'{train_loss/(batch_idx+1):.3f}',
+                'acc': f'{100.*correct/total:.2f}%'
+            })
+
+        # Record the training accuracy
+        train_acc = 100. * correct / total
+        train_correct = correct
+        train_total = total
+
+        # Evaluation phase
+        model.eval()
+        test_loss = 0
+        correct = 0
+        total = 0
+
+        test_pbar = tqdm(testloader, desc=f'Epoch {epoch+1}/{epochs} [Test]')
+        with torch.no_grad():
+            for batch_idx, (inputs, targets) in enumerate(test_pbar):
+                inputs, targets = inputs.to(device), targets.to(device)
+                outputs = model(inputs)
+                loss = criterion(outputs, targets)
+
+                test_loss += loss.item()
+                _, predicted = outputs.max(1)
+                total += targets.size(0)
+                correct += predicted.eq(targets).sum().item()
+
+                # Update the progress bar
+                test_pbar.set_postfix({
+                    'loss': f'{test_loss/(batch_idx+1):.3f}',
+                    'acc': f'{100.*correct/total:.2f}%'
+                })
+
+        # Compute the test accuracy
+        acc = 100. * correct / total
+
+        # Log train/test loss and accuracy
+        logger.info(f'Epoch: {epoch+1} | Train Loss: {train_loss/len(trainloader):.3f} | Train Acc: {train_acc:.2f}% | '
+                    f'Test Loss: {test_loss/(batch_idx+1):.3f} | Test Acc: {acc:.2f}%')
+
+        # Save the files needed to visualize the training process
+        if (epoch + 1) % interval == 0 or (epoch == 0):
+            # Save only the first 1000 samples
+            subset_indices = list(range(1000))  # indices of the first 1000 samples
+            subset_dataset = torch.utils.data.Subset(trainloader.dataset, subset_indices)
+
+            # Dataloader that iterates over the first 1000 samples in order
+            ordered_trainloader = torch.utils.data.DataLoader(
+                subset_dataset,
+                batch_size=trainloader.batch_size,
+                shuffle=False,
+                num_workers=trainloader.num_workers
+            )
+
+            epoch_save_dir = os.path.join(save_dir, f'epoch_{epoch+1}')
+            save_model = time_travel_saver(model, ordered_trainloader, device, epoch_save_dir, model_name,
+                                           show=True, layer_name='avg_pool', auto_save_embedding=True)
+            save_model.save_checkpoint_embeddings_predictions()
+            if epoch == 0:
+                save_model.save_lables_index(path="../dataset")
+
+        scheduler.step()
+
+    logger.info('Training finished!')
+
+def main():
+    # Load the configuration file
+    config_path = Path(__file__).parent / 'train.yaml'
+    with open(config_path) as f:
+        config = yaml.safe_load(f)
+
+    # Build the model
+    model = ResNet18(num_classes=10)
+
+    # Get the dataloaders
+    trainloader, testloader = get_cifar10_dataloaders(
+        batch_size=config['batch_size'],
+        num_workers=config['num_workers'],
+        local_dataset_path=config['dataset_path'],
+        shuffle=True
+    )
+
+    # Train the model
+    train_model(
+        model=model,
+        trainloader=trainloader,
+        testloader=testloader,
+        epochs=config['epochs'],
+        lr=config['lr'],
+        device=f'cuda:{config["gpu"]}',
+        save_dir='../epochs',
+        model_name='ResNet18',
+        interval=config['interval']
+    )
+
+if __name__ == '__main__':
+    main()
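The `CosineAnnealingLR(optimizer, T_max=50)` schedule used above follows the closed-form cosine curve from the SGDR paper (for plain per-epoch stepping, with the default `eta_min=0`). A small stdlib-only sketch of that curve, using the script's settings (`lr=0.1`, `T_max=50`):

```python
import math

def cosine_lr(base_lr, t, t_max, eta_min=0.0):
    # eta_min + (base_lr - eta_min) * (1 + cos(pi * t / T_max)) / 2
    return eta_min + (base_lr - eta_min) * (1 + math.cos(math.pi * t / t_max)) / 2

print(round(cosine_lr(0.1, 0, 50), 4))   # 0.1  at the start
print(round(cosine_lr(0.1, 25, 50), 4))  # 0.05 halfway through the cycle
print(round(cosine_lr(0.1, 50, 50), 4))  # 0.0  at T_max
```

Note that `T_max=50` is fixed in `train_model` regardless of `epochs`; with the yaml's `epochs: 3`, only the first sliver of the cosine decay is ever reached.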
ResNet-CIFAR10/Classification-mini/scripts/train.yaml ADDED
@@ -0,0 +1,7 @@
+batch_size: 128
+num_workers: 2
+dataset_path: ../dataset
+epochs: 3
+gpu: 0
+lr: 0.1
+interval: 1