radinplaid committed
Commit 6ee6cc0 (verified)
1 parent: b9193de

Update README.md

Files changed (1):
  README.md +11 -8
README.md CHANGED
@@ -38,7 +38,7 @@ model-index:
 
 ## Try it on our Huggingface Space
 
-Give it a try before downloading here: https://huggingface.co/spaces/quickmt/QuickMT-Demo
+Give it a try before downloading here: https://huggingface.co/spaces/quickmt/QuickMT-gui
 
 
 ## Model Information
@@ -56,22 +56,25 @@ See the `eole` model configuration in this repository for further details and th
 
 You must install the Nvidia cuda toolkit first, if you want to do GPU inference.
 
-Next, install the `quickmt` python library and download the model:
+Next, install the `quickmt` [python library](github.com/quickmt/quickmt).
 
 ```bash
 git clone https://github.com/quickmt/quickmt.git
 pip install ./quickmt/
-
-quickmt-model-download quickmt/quickmt-he-en ./quickmt-he-en
 ```
 
-Finally use the model in python:
+Finally, use the model in python:
 
 ```python
 from quickmt import Translator
-
-# Auto-detects GPU, set to "cpu" to force CPU inference
-t = Translator("./quickmt-he-en/", device="auto")
+from huggingface_hub import snapshot_download
+
+# Download Model (if not downloaded already) and return path to local model
+# Device is either 'auto', 'cpu' or 'cuda'
+t = Translator(
+    snapshot_download("quickmt/quickmt-he-en", ignore_patterns="eole-model/*"),
+    device="cpu"
+)
 
 # Translate - set beam size to 1 for faster speed (but lower quality)
 sample_text = '"讚专 讗讛讜讚 讗讜专, 驻专讜驻住讜专 诇专驻讜讗讛 讘讗讜谞讬讘专住讬讟转 讚诇讛讗讜讝讬 讘讛诇讬驻拽住, 谞讜讘讛 住拽讜讟讬讛 讜专讗砖 讛诪讞诇拽讛 讛拽诇讬谞讬转 讜讛诪讚注讬转 砖诇 讗专讙讜谉 讞拽专 讛住讜讻专转 讛拽谞讚讬 讛讝讛讬专 砖讛诪讞拽专 注讚讬讬谉 讘讬诪讬讜 讛专讗砖讜谞讬诐."'
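A note on the `ignore_patterns="eole-model/*"` argument the commit introduces: `huggingface_hub.snapshot_download` skips any repository file whose path matches the given shell-style glob, so the `eole-model/` directory (presumably the `eole` training artifacts mentioned earlier in the README) is never fetched. A minimal sketch of that filtering behavior using the standard-library `fnmatch` — the file names below are hypothetical, not the repo's actual listing:

```python
from fnmatch import fnmatch

# Hypothetical repository file list; anything matching "eole-model/*"
# is skipped, mirroring the ignore_patterns argument in the diff above.
repo_files = [
    "config.json",
    "model.bin",
    "eole-model/checkpoint.pt",
    "eole-model/vocab.txt",
]
ignore_patterns = ["eole-model/*"]

# Keep only files that match none of the ignore globs
downloaded = [
    f for f in repo_files
    if not any(fnmatch(f, pattern) for pattern in ignore_patterns)
]
print(downloaded)  # ['config.json', 'model.bin']
```

This keeps the download limited to the CTranslate2-style inference files the `Translator` actually needs, rather than the full training checkpoint.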