ykhrustalev committed · verified
Commit b624815 · 1 Parent(s): 20f92cc

Update README.md

Files changed (1)
  1. README.md +2 -1
README.md CHANGED
@@ -77,7 +77,7 @@ LFM2.5-350M is a general-purpose text-only model with the following features:
 | [LFM2.5-350M-GGUF](https://huggingface.co/LiquidAI/LFM2.5-350M-GGUF) | Quantized format for llama.cpp and compatible tools. Optimized for CPU inference and local deployment with reduced memory usage. |
 | [LFM2.5-350M-ONNX](https://huggingface.co/LiquidAI/LFM2.5-350M-ONNX) | ONNX Runtime format for cross-platform deployment. Enables hardware-accelerated inference across diverse environments (cloud, edge, mobile). |
 | [LFM2.5-350M-MLX](https://huggingface.co/LiquidAI/LFM2.5-350M-MLX-8bit) | MLX format for Apple Silicon. Optimized for fast inference on Mac devices using the MLX framework. |
-| [LFM2.5-350M-OpenVINO](https://huggingface.co/LiquidAI/LFM2.5-350M-OpenVINO) | OpenVINO format for Intel hardware acceleration. Optimized for efficient inference on Intel CPUs, GPUs, and NPUs using the OpenVINO toolkit. |
+| [LFM2.5-350M-OpenVINO](https://huggingface.co/OpenVINO/LFM2.5-350M-int8-ov) | OpenVINO format for Intel hardware acceleration. Optimized for efficient inference on Intel CPUs, GPUs, and NPUs. |
 
 We recommend using it for data extraction, structured outputs, and tool use. It is not recommended for knowledge-intensive tasks and programming.
 
@@ -131,6 +131,7 @@ LFM2.5 is supported by many inference frameworks. See the [Inference documentati
 | [llama.cpp](https://github.com/ggml-org/llama.cpp) | Cross-platform inference with CPU offloading. | <a href="https://docs.liquid.ai/lfm/inference/llama-cpp">Link</a> | <a href="https://colab.research.google.com/drive/1ohLl3w47OQZA4ELo46i5E4Z6oGWBAyo8?usp=sharing"><img src="https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/vlOyMEjwHa_b_LXysEu2E.png" width="110" alt="Colab link"></a> |
 | [MLX](https://github.com/ml-explore/mlx) | Apple's machine learning framework optimized for Apple Silicon. | <a href="https://docs.liquid.ai/lfm/inference/mlx">Link</a> | — |
 | [LM Studio](https://lmstudio.ai/) | Desktop application for running LLMs locally. | <a href="https://docs.liquid.ai/lfm/inference/lm-studio">Link</a> | — |
+| [OpenVINO](https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/overview.html) | Intel's toolkit for optimized inference on CPUs, GPUs, and NPUs. | <a href="https://docs.openvino.ai/2026/index.html">Link</a> | — |
 
 Here's a quick start example with Transformers:
 
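For context on what the new rows enable: the added table entry points the OpenVINO link at an int8 export that can be loaded through Optimum Intel. Below is a minimal sketch, assuming the `OpenVINO/LFM2.5-350M-int8-ov` repo is a standard `optimum-intel` export and that `optimum[openvino]` is installed; the prompt and generation settings are illustrative and not taken from the README.

```python
# Minimal sketch: load the int8 OpenVINO export via Optimum Intel.
# Assumes: pip install "optimum[openvino]"
from optimum.intel import OVModelForCausalLM
from transformers import AutoTokenizer

model_id = "OpenVINO/LFM2.5-350M-int8-ov"  # repo named in this diff

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForCausalLM.from_pretrained(model_id)  # runs on Intel CPU by default

# Illustrative prompt matching the README's recommended use (data extraction)
inputs = tokenizer("Extract the date from: 'Invoice issued 2024-03-15.'", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Targeting an Intel GPU instead of the CPU is typically a matter of moving the model with `model.to("GPU")`; see the OpenVINO docs linked in the new framework-table row for device specifics.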