nielsr (HF staff) committed
Commit 6b044a5 · 1 Parent(s): de1ac1b

Update README.md

Files changed (1): README.md (+3 -3)
README.md CHANGED
@@ -17,7 +17,7 @@ widget:

  # LeViT

- LeViT128S model pre-trained on ImageNet-1k at resolution 224x224. It was introduced in the paper [LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference
+ LeViT-128 model pre-trained on ImageNet-1k at resolution 224x224. It was introduced in the paper [LeViT: a Vision Transformer in ConvNet's Clothing for Faster Inference
  ](https://arxiv.org/abs/2104.01136) by Graham et al. and first released in [this repository](https://github.com/facebookresearch/LeViT).

  Disclaimer: The team releasing LeViT did not write a model card for this model so this model card has been written by the Hugging Face team.
@@ -34,8 +34,8 @@ import requests
  url = 'http://images.cocodataset.org/val2017/000000039769.jpg'
  image = Image.open(requests.get(url, stream=True).raw)

- feature_extractor = LevitFeatureExtractor.from_pretrained('anugunj/levit-128')
- model = LevitForImageClassificationWithTeacher.from_pretrained('anugunj/levit-128')
+ feature_extractor = LevitFeatureExtractor.from_pretrained('facebook/levit-128')
+ model = LevitForImageClassificationWithTeacher.from_pretrained('facebook/levit-128')

  inputs = feature_extractor(images=image, return_tensors="pt")
  outputs = model(**inputs)