gacfox committed
Commit 0c9c39d · verified · 1 Parent(s): a63464c

update README

Files changed (1): README.md +64 -1
README.md CHANGED
---
base_model:
- timm/swin_base_patch4_window7_224.ms_in22k_ft_in1k
pipeline_tag: image-classification
library_name: timm
---

# PowerPoint slide classifier

This model classifies PowerPoint slides into five layout types (`content`, `end`, `start`, `subt`, `subtl`). It was finetuned from `timm/swin_base_patch4_window7_224.ms_in22k_ft_in1k` on 10k PowerPoint slide images.

## Usage

### Install timm and dependencies

```bash
pip install timm==1.0.15 torch==2.7.0 torchvision==0.22.0
```
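
The inference snippet below expects the fine-tuned checkpoint `pytorch_model.bin` on local disk. As a minimal sketch, it can be fetched with `huggingface_hub` (not included in the install command above); the repo id used here is a placeholder, substitute the actual id of this model repository.

```python
# Sketch: download the fine-tuned checkpoint from the Hub.
# NOTE: "your-namespace/powerpoint-slide-classifier" is a placeholder,
# not the real repository id; replace it with this model repo's id.
from huggingface_hub import hf_hub_download

checkpoint_path = hf_hub_download(
    repo_id="your-namespace/powerpoint-slide-classifier",
    filename="pytorch_model.bin",
)
print(checkpoint_path)  # local path usable in place of 'pytorch_model.bin' below
```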

### Inference

Use the following code to classify the PNG images in a folder.

```python
import os
import timm
import torch
from PIL import Image
from torchvision import transforms

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
image_folder = 'path_to_images'

# Preprocessing: resize to 224x224 and normalize with ImageNet statistics
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(
        mean=[0.485, 0.456, 0.406],
        std=[0.229, 0.224, 0.225]
    )
])

# Rebuild the backbone with a 5-class head and load the fine-tuned weights
model = timm.create_model('swin_base_patch4_window7_224', pretrained=False, num_classes=5)
model.load_state_dict(torch.load('pytorch_model.bin', map_location=device))
model.to(device)
model.eval()

image_files = [f for f in os.listdir(image_folder) if f.lower().endswith('.png')]

# Mapping from class index to slide-layout label
idx_to_class = {
    0: 'content',
    1: 'end',
    2: 'start',
    3: 'subt',
    4: 'subtl'
}

with torch.no_grad():
    for image_name in image_files:
        image_path = os.path.join(image_folder, image_name)
        image = Image.open(image_path).convert('RGB')
        input_tensor = transform(image).unsqueeze(0).to(device)

        output = model(input_tensor)
        predicted_class = torch.argmax(output, dim=1).item()
        predicted_label = idx_to_class[predicted_class]

        print(f"{image_name} --> {predicted_label}")
```
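
If a confidence estimate is useful in addition to the hard label, the logits can be turned into per-class probabilities with a softmax. The snippet below is a small sketch that reuses `model`, `transform`, `device`, and `idx_to_class` from the example above; `'slide.png'` is a placeholder path.

```python
import torch.nn.functional as F

# Sketch: per-class probabilities for one slide, reusing the objects defined above.
image = Image.open('slide.png').convert('RGB')  # placeholder image path
input_tensor = transform(image).unsqueeze(0).to(device)

with torch.no_grad():
    probs = F.softmax(model(input_tensor), dim=1).squeeze(0)

for idx, prob in enumerate(probs.tolist()):
    print(f"{idx_to_class[idx]}: {prob:.3f}")
```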