Nope137 committed on
Commit
c0937c0
·
1 Parent(s): 2f09f59

Describe your changes here

This view is limited to 50 files because it contains too many changes. See raw diff
Files changed (50)
  1. .cursor/rules/olv-core-rules.mdc +111 -0
  2. .dockerignore +79 -0
  3. .gemini/GEMINI.md +106 -0
  4. .gemini/styleguide.md +165 -0
  5. .gitattributes +1 -35
  6. .github/FUNDING.yml +14 -0
  7. .github/ISSUE_TEMPLATE/bug---question---get-help---bug---提问---求助.md +79 -0
  8. .github/ISSUE_TEMPLATE/feature-request---功能建议.md +44 -0
  9. .github/copilot-instructions.md +106 -0
  10. .github/workflows/codeql.yml +92 -0
  11. .github/workflows/create_release.yml +238 -0
  12. .github/workflows/docker-blacksmith.yml +207 -0
  13. .github/workflows/fossa_scan.yml +16 -0
  14. .github/workflows/ruff.yml +8 -0
  15. .github/workflows/update-requirements.yml +30 -0
  16. .gitignore +78 -0
  17. .gitmodules +4 -0
  18. .pre-commit-config.yaml +9 -0
  19. .python-version +1 -0
  20. CLAUDE.md +156 -0
  21. CONTRIBUTING.md +4 -0
  22. Dockerfile +81 -0
  23. README.md +158 -10
  24. doc/README.md +4 -0
  25. doc/sample_conf/sherpaASRTTS_sense_voice_melo.yaml +78 -0
  26. doc/sample_conf/sherpaASRTTS_sense_voice_piper_en.yaml +77 -0
  27. doc/sample_conf/sherpaASRTTS_sense_voice_vits_zh.yaml +77 -0
  28. doc/sample_conf/sherpaASR_paraformer.yaml +65 -0
  29. doc/sample_conf/sherpaASR_sense_voice.yaml +67 -0
  30. model_dict.json +30 -0
  31. pixi.lock +1652 -0
  32. prompts/README.md +17 -0
  33. prompts/__init__.py +0 -0
  34. prompts/prompt_loader.py +74 -0
  35. prompts/utils/concise_style_prompt.txt +18 -0
  36. prompts/utils/group_conversation_prompt.txt +7 -0
  37. prompts/utils/live2d_expression_prompt.txt +14 -0
  38. prompts/utils/live_prompt.txt +9 -0
  39. prompts/utils/mcp_prompt.txt +36 -0
  40. prompts/utils/proactive_speak_prompt.txt +1 -0
  41. prompts/utils/speakable_prompt.txt +13 -0
  42. prompts/utils/think_tag_prompt.txt +6 -0
  43. prompts/utils/tool_guidance_prompt.txt +1 -0
  44. pyproject.toml +68 -0
  45. run_server.py +178 -0
  46. scripts/run_bilibili_live.py +62 -0
  47. src/open_llm_vtuber/__init__.py +0 -0
  48. src/open_llm_vtuber/agent/__init__.py +0 -0
  49. src/open_llm_vtuber/agent/agent_factory.py +132 -0
  50. src/open_llm_vtuber/agent/agents/__init__.py +0 -0
.cursor/rules/olv-core-rules.mdc ADDED
@@ -0,0 +1,111 @@
+ ---
+ alwaysApply: true
+ ---
+
+
+ # Open-LLM-VTuber AI Coding Assistant: Context & Guidelines
+
+ `version: 2025.08.05-1`
+
+ ## 1. Core Project Context
+
+ - **Project:** Open-LLM-VTuber, a low-latency voice-based LLM interaction tool.
+ - **Language:** Python >= 3.10
+ - **Core Tech Stack:**
+   - **Backend:** FastAPI, Pydantic v2, Uvicorn, fully async
+   - **Real-time Communication:** WebSockets
+   - **Package Management:** `uv` (version ~= 0.8, as of August 2025). Always use `uv run`, `uv sync`, `uv add`, and `uv remove` instead of `pip`.
+ - **Primary Goal:** Achieve end-to-end latency below 500 ms (user speaks -> AI voice heard). Performance is critical.
+ - **Key Principles:**
+   - **Offline-Ready:** Core functionality MUST work without an internet connection.
+   - **Separation of Concerns:** Strict frontend-backend separation.
+   - **Clean Code:** Write clean, testable, maintainable code that follows Python 3.10+ best practices and avoids deprecated APIs.
+
+ Some key files and directories:
+
+ ```
+ doc/                     # A deprecated directory
+ frontend/                # Compiled web frontend artifacts (from git submodule)
+ config_templates/
+     conf.default.yaml    # Configuration template for English users
+     conf.ZH.default.yaml # Configuration template for Chinese users
+ src/open_llm_vtuber/     # Project source code
+     config_manager/
+         main.py          # Pydantic models for configuration validation
+ run_server.py            # Entrypoint to start the application
+ conf.yaml                # User's configuration file, generated from a template
+ ```
+
+ ### 1.1. Repository Structure
+
+ - Frontend Repository: The frontend is a React application developed in a separate repository, `Open-LLM-VTuber-Web`. Its built artifacts are integrated into the `frontend/` directory of this backend repository via a git submodule.
+
+ - Documentation Repository: The official documentation site is hosted in the `open-llm-vtuber.github.io` repository. When asked to generate documentation, create Markdown files in the project root. The user will be responsible for migrating them to the documentation site.
+
+ ### 1.2. Configuration Files
+
+ - Configuration templates are located in the `config_templates/` directory:
+   - `conf.default.yaml`: Template for English-speaking users.
+   - `conf.ZH.default.yaml`: Template for Chinese-speaking users.
+ - When modifying the configuration structure, both template files MUST be updated accordingly.
+ - Configuration is validated on load using the Pydantic models defined in `src/open_llm_vtuber/config_manager/main.py`. Any changes to configuration options must be reflected in these models.
+
+ ## 2. Overarching Coding Philosophy
+
+ - **Simplicity and Readability:** Write code that is simple, clear, and easy to understand. Avoid unnecessary complexity or premature optimization. Follow the Zen of Python.
+ - **Single Responsibility:** Each function, class, and module should do one thing and do it well.
+ - **Performance-Aware:** Be mindful of performance. Avoid blocking operations in async contexts. Use efficient data structures and algorithms where it matters.
+ - **Adherence to Best Practices:** Write clean, testable, and robust code that follows modern Python 3.10+ idioms. Adhere to the best practices of our core libraries (FastAPI, Pydantic v2).
+
+ ## 3. Detailed Coding Standards
+
+ ### 3.1. Formatting & Linting (Ruff)
+
+ - All Python code **MUST** be formatted with `uv run ruff format`.
+ - All Python code **MUST** pass `uv run ruff check` without errors.
+ - Import statements should be grouped into standard library, third-party, and local modules, and sorted alphabetically (PEP 8).
+
+ ### 3.2. Naming Conventions (PEP 8)
+
+ - Use `snake_case` for all variables, functions, methods, and module names.
+ - Use `PascalCase` for class names.
+ - Choose descriptive names. Avoid single-letter names except for loop counters or well-known initialisms.
+
+ ### 3.3. Type Hints (CRITICAL)
+
+ - Target Python 3.10+. Use modern type hint syntax.
+ - **DO:** Use `|` for unions (e.g., `str | None`).
+ - **DON'T:** Use `Optional` from `typing` (e.g., `Optional[str]`).
+ - **DO:** Use built-in generics (e.g., `list[int]`, `dict[str, float]`).
+ - **DON'T:** Use capitalized types from `typing` (e.g., `List[int]`, `Dict[str, float]`).
+ - All function and method signatures (arguments and return values) **MUST** have accurate type hints. If a third-party library makes a type error impossible to fix, suppress the type checker for that specific line.
+
+ ### 3.4. Docstrings & Comments (CRITICAL)
+
+ - All public modules, functions, classes, and methods **MUST** have a docstring in English.
+ - Use the **Google Python Style** for docstrings.
+ - Docstrings **MUST** include:
+   1. A summary line.
+   2. An `Args:` section describing each parameter, its type, and its purpose.
+   3. A `Returns:` section describing the return value, its type, and its meaning.
+   4. (Optional but encouraged) A `Raises:` section for any exceptions thrown.
+ - All other code comments must also be in English.
+
+ ### 3.5. Logging
+
+ - Use the `loguru` module for all informational or error output.
+ - Log messages should be in English, clear, and informative. Use emoji when appropriate.
+
+ ## 4. Architectural Principles
+
+ ### 4.1. Dependency Management
+
+ - First, try to solve the problem using the Python standard library or existing project dependencies defined in `pyproject.toml`.
+ - If a new dependency is required, it must have a compatible license and be well-maintained.
+ - Use `uv add`, `uv remove`, and `uv run` instead of `pip` to manage dependencies. If the user is on conda, install `uv` with `pip` first.
+ - After adding a new dependency, update `requirements.txt` in addition to `pyproject.toml`.
+
+ ### 4.2. Cross-Platform Compatibility
+
+ - All core logic **MUST** run on macOS, Windows, and Linux.
+ - If a feature is platform-specific (e.g., uses a Windows-only API) or hardware-specific (e.g., CUDA), it **MUST** be an optional component. The application should start and run core features even if that component is not available. Use graceful fallbacks or clear error messages.
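The optional-component rule in 4.2 can be sketched as a guarded import with a graceful fallback. This is a minimal illustration (the `beep` helper is hypothetical, not project code); `winsound` is a real Windows-only stdlib module:

```python
# Try to load an optional, platform-specific backend; fall back gracefully.
try:
    import winsound  # Windows-only stdlib module
except ImportError:
    winsound = None


def beep() -> bool:
    """Plays a notification beep if a platform backend is available.

    Returns:
        True if a beep was played, False if no backend exists on this platform.
    """
    if winsound is None:
        return False  # Core features keep running; this one is simply unavailable.
    winsound.Beep(1000, 200)  # 1 kHz tone for 200 ms
    return True
```

The application imports the optional piece at startup, and every caller checks availability instead of crashing on macOS or Linux.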
.dockerignore ADDED
@@ -0,0 +1,79 @@
+ # User config & backup
+ /conf.yaml
+ # Avatars
+ # avatars/*
+ # Backgrounds
+ # backgrounds/*
+
+ /conf.yaml.backup
+
+ # Default templates retained
+ !/config_templates/**
+
+ # Live2D models - ignore non-defaults
+ live2d-models/*
+ !live2d-models/mao_pro/**
+ !live2d-models/shizuku/**
+
+ # Characters
+ characters/*
+ # All default characters tracked via git
+
+ # System files
+ .DS_Store
+
+ # Python cache
+ __pycache__/
+ *.pyc
+
+ # IDE & local files
+ /.idea/
+ lab.py
+
+ # Virtual envs
+ .venv/
+ .conda/
+ conda/
+
+ # API keys, secrets
+ .env
+ api_keys.py
+ src/open_llm_vtuber/llm/user_credentials.json
+
+ # Database
+ memory.db*
+ mem.json
+
+ # Logs
+ server.log
+ logs/*
+
+ # Cache & models
+ cache/*
+ asset/
+ models/*
+ !models/piper_voice/**
+ src/open_llm_vtuber/tts/asset/
+ src/open_llm_vtuber/tts/config/
+ src/open_llm_vtuber/asr/models/*
+ !src/open_llm_vtuber/asr/models/silero_vad.onnx
+
+ # Misc
+ tmp/
+ private/
+ legacy/
+ chat_history/
+ knowledge_base/
+ submodules/MeloTTS
+ openapi_*.json
+
+ # Windows builds
+ *.exe
+
+ # Packaging artifacts
+ *.egg-info/
+ build/
+ dist/
+ .cache/
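The `models/*` / `!models/piper_voice/**` pair above relies on `.dockerignore`'s rule that patterns are evaluated in file order and the last match wins, with `!` re-including previously excluded paths. A rough stdlib sketch of that rule (deliberately simplified; real Docker matching of `**` and path segments is more involved):

```python
from fnmatch import fnmatch


def is_ignored(path: str, patterns: list[str]) -> bool:
    """Approximates .dockerignore evaluation: the last matching pattern wins.

    Args:
        path: A context-relative file path.
        patterns: Patterns in file order; a leading '!' re-includes matches.

    Returns:
        True if the path would be excluded from the build context.
    """
    ignored = False
    for pat in patterns:
        negated = pat.startswith("!")
        raw = pat.lstrip("!")
        # Crude '**' handling for this sketch: fnmatch's '*' already
        # crosses '/' boundaries, so collapse '**' to '*'.
        if fnmatch(path, raw.replace("**", "*")):
            ignored = not negated  # later patterns override earlier ones
    return ignored
```

With `["models/*", "!models/piper_voice/**"]`, a file under `models/piper_voice/` is first excluded and then re-included, which is why the negation must come after the broad exclusion.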
.gemini/GEMINI.md ADDED
@@ -0,0 +1,106 @@
+ # Open-LLM-VTuber AI Coding Assistant: Context & Guidelines
+
+ `version: 2025.08.05-1`
+
+ ## 1. Core Project Context
+
+ - **Project:** Open-LLM-VTuber, a low-latency voice-based LLM interaction tool.
+ - **Language:** Python >= 3.10
+ - **Core Tech Stack:**
+   - **Backend:** FastAPI, Pydantic v2, Uvicorn, fully async
+   - **Real-time Communication:** WebSockets
+   - **Package Management:** `uv` (version ~= 0.8, as of August 2025). Always use `uv run`, `uv sync`, `uv add`, and `uv remove` instead of `pip`.
+ - **Primary Goal:** Achieve end-to-end latency below 500 ms (user speaks -> AI voice heard). Performance is critical.
+ - **Key Principles:**
+   - **Offline-Ready:** Core functionality MUST work without an internet connection.
+   - **Separation of Concerns:** Strict frontend-backend separation.
+   - **Clean Code:** Write clean, testable, maintainable code that follows Python 3.10+ best practices and avoids deprecated APIs.
+
+ Some key files and directories:
+
+ ```
+ doc/                     # A deprecated directory
+ frontend/                # Compiled web frontend artifacts (from git submodule)
+ config_templates/
+     conf.default.yaml    # Configuration template for English users
+     conf.ZH.default.yaml # Configuration template for Chinese users
+ src/open_llm_vtuber/     # Project source code
+     config_manager/
+         main.py          # Pydantic models for configuration validation
+ run_server.py            # Entrypoint to start the application
+ conf.yaml                # User's configuration file, generated from a template
+ ```
+
+ ### 1.1. Repository Structure
+
+ - Frontend Repository: The frontend is a React application developed in a separate repository, `Open-LLM-VTuber-Web`. Its built artifacts are integrated into the `frontend/` directory of this backend repository via a git submodule.
+
+ - Documentation Repository: The official documentation site is hosted in the `open-llm-vtuber.github.io` repository. When asked to generate documentation, create Markdown files in the project root. The user will be responsible for migrating them to the documentation site.
+
+ ### 1.2. Configuration Files
+
+ - Configuration templates are located in the `config_templates/` directory:
+   - `conf.default.yaml`: Template for English-speaking users.
+   - `conf.ZH.default.yaml`: Template for Chinese-speaking users.
+ - When modifying the configuration structure, both template files MUST be updated accordingly.
+ - Configuration is validated on load using the Pydantic models defined in `src/open_llm_vtuber/config_manager/main.py`. Any changes to configuration options must be reflected in these models.
+
+ ## 2. Overarching Coding Philosophy
+
+ - **Simplicity and Readability:** Write code that is simple, clear, and easy to understand. Avoid unnecessary complexity or premature optimization. Follow the Zen of Python.
+ - **Single Responsibility:** Each function, class, and module should do one thing and do it well.
+ - **Performance-Aware:** Be mindful of performance. Avoid blocking operations in async contexts. Use efficient data structures and algorithms where it matters.
+ - **Adherence to Best Practices:** Write clean, testable, and robust code that follows modern Python 3.10+ idioms. Adhere to the best practices of our core libraries (FastAPI, Pydantic v2).
+
+ ## 3. Detailed Coding Standards
+
+ ### 3.1. Formatting & Linting (Ruff)
+
+ - All Python code **MUST** be formatted with `uv run ruff format`.
+ - All Python code **MUST** pass `uv run ruff check` without errors.
+ - Import statements should be grouped into standard library, third-party, and local modules, and sorted alphabetically (PEP 8).
+
+ ### 3.2. Naming Conventions (PEP 8)
+
+ - Use `snake_case` for all variables, functions, methods, and module names.
+ - Use `PascalCase` for class names.
+ - Choose descriptive names. Avoid single-letter names except for loop counters or well-known initialisms.
+
+ ### 3.3. Type Hints (CRITICAL)
+
+ - Target Python 3.10+. Use modern type hint syntax.
+ - **DO:** Use `|` for unions (e.g., `str | None`).
+ - **DON'T:** Use `Optional` from `typing` (e.g., `Optional[str]`).
+ - **DO:** Use built-in generics (e.g., `list[int]`, `dict[str, float]`).
+ - **DON'T:** Use capitalized types from `typing` (e.g., `List[int]`, `Dict[str, float]`).
+ - All function and method signatures (arguments and return values) **MUST** have accurate type hints. If a third-party library makes a type error impossible to fix, suppress the type checker for that specific line.
+
+ ### 3.4. Docstrings & Comments (CRITICAL)
+
+ - All public modules, functions, classes, and methods **MUST** have a docstring in English.
+ - Use the **Google Python Style** for docstrings.
+ - Docstrings **MUST** include:
+   1. A summary line.
+   2. An `Args:` section describing each parameter, its type, and its purpose.
+   3. A `Returns:` section describing the return value, its type, and its meaning.
+   4. (Optional but encouraged) A `Raises:` section for any exceptions thrown.
+ - All other code comments must also be in English.
+
+ ### 3.5. Logging
+
+ - Use the `loguru` module for all informational or error output.
+ - Log messages should be in English, clear, and informative. Use emoji when appropriate.
+
+ ## 4. Architectural Principles
+
+ ### 4.1. Dependency Management
+
+ - First, try to solve the problem using the Python standard library or existing project dependencies defined in `pyproject.toml`.
+ - If a new dependency is required, it must have a compatible license and be well-maintained.
+ - Use `uv add`, `uv remove`, and `uv run` instead of `pip` to manage dependencies. If the user is on conda, install `uv` with `pip` first.
+ - After adding a new dependency, update `requirements.txt` in addition to `pyproject.toml`.
+
+ ### 4.2. Cross-Platform Compatibility
+
+ - All core logic **MUST** run on macOS, Windows, and Linux.
+ - If a feature is platform-specific (e.g., uses a Windows-only API) or hardware-specific (e.g., CUDA), it **MUST** be an optional component. The application should start and run core features even if that component is not available. Use graceful fallbacks or clear error messages.
.gemini/styleguide.md ADDED
@@ -0,0 +1,165 @@
+ version: 2025.08.04-1-en
+
+ # Pull Request Guide & Checklist
+
+ Welcome, and thank you for choosing to contribute to the Open-LLM-VTuber project! We are deeply grateful for the effort of every contributor.
+
+ This guide is designed to help all contributors, maintainers, and even LLMs collaborate effectively, ensuring the project's high quality, maintainability, and long-term health. Please refer to this guide both when submitting a Pull Request (PR) and when reviewing PRs from others.
+
+ We believe that clear standards and processes are not only the cornerstone of project maintenance but also an excellent opportunity for us to learn and grow together.
+
+ ⚠️ The coding standards mentioned below apply primarily to new code submissions. Some legacy code may not currently pass all type checks. We are working to fix this incrementally, but it will take time. When encountering type errors reported by the type checker, please focus only on the parts of the code your PR modifies. Adhere to principle **A1 (A PR should do one thing)**. If you wish to help fix existing type errors, please open a separate PR for that purpose.
+
+ ---
+
+ ### A. The Golden Rule: Atomic PRs
+
+ This is our most important principle. Please adhere to it strictly.
+
+ **A1. A single PR should do one thing, and one thing only.**
+
+ * **Good examples 👍:**
+   * `fix: Resolve audio stuttering on macOS`
+   * `feat: Add OpenAI TTS support`
+   * `refactor: Rework the audio_processing module`
+ * **Bad examples 👎:**
+   * `fix: Resolve bug A, bug B, and implement feature C`
+
+ **Why is this so important?**
+
+ * **Easy to Review:** Small, focused PRs allow reviewers to understand your changes more quickly and deeply, leading to higher-quality feedback. As stated in *The Pragmatic Programmer*, "Tip 38: It's Easier to Change Sooner." Small PRs facilitate rapid feedback loops.
+ * **Easy to Track:** When a problem arises in the future, a clean Git history (thanks to `git bisect`) allows us to quickly pinpoint the exact change that introduced the issue.
+ * **Easy to Revert:** If a small change introduces a bug, we can easily revert it without impacting other unrelated features or fixes.
+
+ ### B. Contributor's Checklist: Submitting My PR
+
+ Before you submit your PR, please confirm each of the following items. This not only significantly speeds up the merge process but is also a sign of respect for your own work and for your fellow collaborators.
+
+ #### B1. PR Title & Description
+
+ * [ ] **B1.1: Clear Title:** The title should concisely summarize the core content of the PR. For example: `feat: Add OpenAI TTS support` or `fix: Resolve audio stuttering on macOS`. Remember, a PR should only do one thing (A1).
+ * [ ] **B1.2: Complete Description:** The description area should clearly explain:
+   * **What:** Briefly describe the purpose and context of this PR.
+   * **Why:** Explain the necessity of this change. If it's a bug fix, please link to the relevant Issue.
+   * **How:** Briefly outline the technical implementation approach.
+   * **How to Test:** Provide clear, step-by-step instructions so that reviewers can reproduce and verify your work.
+
+ #### B2. Code Quality Self-Check
+
+ * [ ] **B2.1: Atomicity:** Does my PR strictly adhere to the **A1** principle?
+ * [ ] **B2.2: Formatting & Linting:** Have I run and passed the following commands locally?
+   ```bash
+   uv run ruff format
+   uv run ruff check
+   ```
+ * [ ] **B2.3: Naming Conventions:** Do all variable, function, and module names follow **D3.2**? (i.e., PEP 8's `snake_case` style).
+ * [ ] **B2.4: Type Hints & Docstrings:**
+   * [ ] **B2.4.1:** Do all new or modified functions include Type Hints compliant with **D3.3**?
+   * [ ] **B2.4.2:** Do all new or modified functions include English Docstrings compliant with **D3.3**?
+ * [ ] **B2.5: Dependency Management:** If I've added a new third-party library, have I carefully considered and followed the principles in **D5. Dependency Management**?
+ * [ ] **B2.6: Cross-Platform Compatibility:** Does my code run correctly on macOS, Windows, and Linux? If I've introduced components specific to a platform or GPU, have I made them optional?
+ * [ ] **B2.7: Comment Language:** Are all in-code comments, Docstrings, and console outputs in English? (This excludes i18n localization implementations, but English must be the default).
+
+ #### B3. Functional & Logical Self-Check
+
+ * [ ] **B3.1: Functional Testing:** Have I thoroughly tested my changes locally to ensure they work as expected and do not introduce new bugs?
+ * [ ] **B3.2: Alignment with Project Goals:** Do my changes align with the **D1. Core Project Goals** and not conflict with the **D2. Future Project Goals**?
+
+ #### B4. Documentation Update
+
+ * [ ] **B4.1: Documentation Sync:** If my PR introduces a new feature, a new configuration option, or any change that users need to be aware of, have I updated the relevant documentation in the docs repository (https://github.com/Open-LLM-VTuber/open-llm-vtuber.github.io)? (No exceptions).
+ * [ ] **B4.2: Changelog Entry:** (Optional, but recommended) Add a brief entry for your change under the "Unreleased" section in `CHANGELOG.md`.
+
+ ### C. Maintainer's Checklist: Reviewing a PR
+
+ For the long-term health of the project, please carefully check the following items during a code review. You can reference these item numbers directly (e.g., "Regarding C2.1, I believe the maintenance cost of this feature might outweigh its benefits...") to initiate a discussion.
+
+ * [ ] **C1. Understand the Change:** Have I fully read and understood all the code and the intent behind this PR?
+ * [ ] **C2. Strategic Alignment:**
+   * [ ] **C2.1: Necessity vs. Maintenance Cost:** Is this feature truly necessary? Does the value it provides justify the future maintenance cost we will incur? As Fred Brooks wrote in *The Mythical Man-Month*, "the conceptual integrity of the product... is the most important consideration in system design."
+   * [ ] **C2.2: Core Goal Alignment:** Does it fully align with the **D1. Core Project Goals**?
+   * [ ] **C2.3: Future Goal Alignment:** Is it consistent with, or at least not in conflict with, the **D2. Future Project Goals** and the project roadmap?
+ * [ ] **C3. Implementation Quality:**
+   * [ ] **C3.1: Design Elegance:** Is the implementation sufficiently "simple" and "elegant"? Is there any over-engineering or premature optimization? "Simplicity is the ultimate sophistication." - Leonardo da Vinci.
+   * [ ] **C3.2: Maintainability:** Is the code modular, loosely coupled, easy to understand, and testable?
+   * [ ] **C3.3: Technical Detail Check:** Have all items from the contributor's self-checklist (**B2, B3, B4**) been met? (e.g., Are Type Hints accurate? Are Docstrings clear? Do Ruff checks pass?).
+ * [ ] **C4. Documentation Completeness:** Has the relevant documentation been created or updated, and is its content clear and accurate?
+
+ ### D. Project Reference Standards
+
+ This section details our core values and technical specifications, which serve as the basis for all the checklists above.
+
+ #### D1. Core Project Goals
+
+ * **D1.1. Offline Operation:** The project's core functionality must support fully offline operation. Any feature requiring an internet connection must be an optional module.
+ * **D1.2. Frontend-Backend Separation:** Strictly adhere to a separated frontend-backend architecture to facilitate independent development and maintenance.
+ * **D1.3. Cross-Platform:** Core backend components must run on macOS, Windows, and Linux via CPU. Any component dependent on a specific platform or GPU must be optional.
+ * **D1.4. Updatability:** Users should be able to upgrade smoothly via an update script. Any Breaking Changes must be accompanied by a major version bump (e.g., v1 -> v2) and a switch to a new release branch.
+ * **D1.5. Maintainability:** The code must be simple, modular, decoupled, testable, and follow best practices.
+
+ #### D2. Future Project Goals
+
+ We are moving in the following directions. All new contributions should strive to align with these goals (though it's not strictly mandatory, as these goals will likely be implemented together in a future v2 refactor).
+
+ * **D2.1. GUI for Settings:** Gradually replace traditional `yaml` configuration files with a GUI-based settings interface.
+ * **D2.2. Plugin Architecture:** Build a plugin-based ecosystem, using a Launcher service to manage and run modules like ASR/TTS/LLM via a GUI.
+ * **D2.3. Stable API:** Provide a stable and reliable backend API for plugins and the frontend to consume.
+ * **D2.4. Automated Testing:** Comprehensively adopt `pytest`-based automated testing. New code should be designed with testability in mind.
+
+ #### D3. Detailed Coding Standards
+
+ **D3.1. Linter & Formatter**
+ We use **Ruff** to unify code style and check for potential issues. All submitted code must pass both `ruff format` and `ruff check`.
+
+ **D3.2. Naming Conventions**
+ * Follow Python's **PEP 8** style guide.
+ * Use **snake_case** for naming variables, functions, and modules.
+ * Names should be clear, descriptive, and unambiguous. Avoid single-letter variable names (except for loop counters).
+
+ **D3.3. Type Hints & Docstrings**
+ * **Why are they important?** Type Hints and Docstrings are the "manual" for your code. They help:
+   * Other developers to quickly understand your code.
+   * IDEs and static analysis tools (like VSCode, Ruff) to perform smarter error checking and code completion.
+   * You, months from now, to understand the code you wrote yourself.
+ * **Type Hint Requirements:**
+   * All function/method parameters and return values **must** include Type Hints.
+   * The project targets **Python 3.10+**. Please use modern syntax, such as `str | None` instead of `Optional[str]`, and `list[str]` instead of `List[str]` (as per [PEP 604](https://peps.python.org/pep-0604/) and [PEP 585](https://peps.python.org/pep-0585/)).
+   * Type Hints must be accurate. It is recommended to set VSCode's Python type checker to `basic` or `strict` mode for validation.
+ * **Docstring Requirements:**
+   * All new or significantly modified public functions, methods, and classes **must** include an English Docstring.
+   * We recommend the **Google style Docstring format**. It should include at least:
+     * **Summary:** A one-line summary of the function's purpose.
+     * **Args:** A description of each parameter's type and meaning.
+     * **Returns:** A description of the return value's type and meaning.
+   * **Example:**
+     ```python
+     def add(a: int, b: int) -> int:
+         """Calculates the sum of two integers.
+
+         Args:
+             a: The first integer.
+             b: The second integer.
+
+         Returns:
+             The sum of a and b.
+         """
+         return a + b
+     ```
+
+ #### D4. Architectural Principles
+
+ * **D4.1. ASR/LLM/TTS Module Design:** When a library supports multiple models with vastly different configurations, prioritize user experience and ease of understanding.
+   * It is recommended to encapsulate each complex model into a separate, independent module (e.g., `asr-whisper-api`, `asr-funasr`) rather than treating the entire library as one monolithic module. This simplifies user configuration and clarifies responsibilities.
+
+ #### D5. Dependency Management Principles
+
+ * **D5.1. Every new dependency must be carefully considered.**
+   * Can this functionality be achieved with the standard library or an existing dependency?
+   * Is the dependency's license compatible with our project?
+   * Is the dependency's community active? How is its maintenance status? Is it secure and trustworthy? Does it pose a risk of supply chain attacks?
+
+ ---
+
+ Thank you for taking the time to read this guide. We look forward to your contribution!
+
+ Finally, regarding the PR review process, please be patient. Our project is understaffed, and the core maintainers are also quite busy, so reviews may take some time. If a week passes without any response, I apologize in advance, as I may have simply forgotten. Please feel free to ping me (@t41372) or other relevant maintainers in the Pull Request to remind us.
.gitattributes CHANGED
@@ -1,35 +1 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ckpt filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.mlmodel filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.npy filter=lfs diff=lfs merge=lfs -text
- *.npz filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pickle filter=lfs diff=lfs merge=lfs -text
- *.pkl filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- *.safetensors filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tar filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.wasm filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zst filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
+ static/libs/* linguist-vendored
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
.github/FUNDING.yml ADDED
@@ -0,0 +1,14 @@
+ # These are supported funding model platforms
+
+ github: # Replace with up to 4 GitHub Sponsors-enabled usernames e.g., [user1, user2]
+ patreon: # Replace with a single Patreon username
+ open_collective: # Replace with a single Open Collective username
+ ko_fi: # Replace with a single Ko-fi username
+ tidelift: # Replace with a single Tidelift platform-name/package-name e.g., npm/babel
+ community_bridge: # Replace with a single Community Bridge project-name e.g., cloud-foundry
+ liberapay: # Replace with a single Liberapay username
+ issuehunt: # Replace with a single IssueHunt username
+ lfx_crowdfunding: # Replace with a single LFX Crowdfunding project-name e.g., cloud-foundry
+ polar: # Replace with a single Polar username
+ buy_me_a_coffee: yi.ting
+ custom: # Replace with up to 4 custom sponsorship URLs e.g., ['link1', 'link2']
.github/ISSUE_TEMPLATE/bug---question---get-help---bug---提问---求助.md ADDED
@@ -0,0 +1,79 @@
+ ---
+ name: Bug & Question & Get Help | Bug & 提问 & 求助
+ about: Report a bug, ask a question, or get help. 请描述你遇到的问题
+ title: "[GET HELP] "
+ labels: question
+ assignees: ''
+
+ ---
+
+ ### 1. Checklist / 检查项
+
+ - [ ] I have removed sensitive information from configuration/logs.
+
+   我已移除配置或日志中的敏感信息。
+
+ - [ ] I have checked the [FAQ](https://docs.llmvtuber.com/docs/faq/) and [existing issues](https://github.com/Open-LLM-VTuber/Open-LLM-VTuber/issues).
+
+   我已查阅[常见问题](https://docs.llmvtuber.com/docs/faq/)和[已有 issue](https://github.com/Open-LLM-VTuber/Open-LLM-VTuber/issues)。
+
+ - [ ] I am using the latest version of the project.
+
+   我正在使用项目的最新版本。
+
+ ---
+
+ ### 2. Environment Details / 环境信息
+
+ - How did you install Open-LLM-VTuber:
+
+   你是如何安装 Open-LLM-VTuber 的:
+
+   - [ ] git clone (源码克隆)
+   - [ ] release zip (发布包)
+   - [ ] exe (Windows) (Windows 安装包)
+   - [ ] dmg (macOS) (macOS 安装包)
+
+ - Are you running the backend and frontend on the same device?
+
+   后端和前端是否在同一台设备上运行?
+
+ - If you used a GPU, please provide your GPU model and driver version:
+
+   如果你使用了 GPU,请提供你的 GPU 型号及驱动版本信息:
+
+ - Browser (if applicable):
+
+   浏览器(如果适用):
+
+ ---
+
+ ### 3. Describe the bug / 问题描述
+
+ What exactly is happening? What did you expect to see? How can it be reproduced?
+
+ 请详细描述发生了什么、你希望看到什么,以及如何复现。
+
+ ---
+
+ ### 4. Screenshots / Logs (if relevant) / 截图 / 日志(如有)
+
+ - Backend log: 后端日志
+ - Frontend setting (General): 前端设置(通用)
+ - Frontend console log (F12): 前端控制台日志(F12)
+ - If using Ollama: output of `ollama ps`:
+   如果使用 Ollama,请附上 `ollama ps` 的输出
+
+ ---
+
+ ### 5. Configuration / 配置文件
+
+ > Please provide relevant config files, with sensitive info like API keys removed.
+ >
+ > 请提供相关配置文件(请务必去除 API key 等敏感信息)
+
+ - `conf.yaml`
+ - `model_dict.json`, `.model3.json`
.github/ISSUE_TEMPLATE/feature-request---功能建议.md ADDED
@@ -0,0 +1,44 @@
+ ---
+ name: Feature request / 功能建议
+ about: Suggest an idea for this project / 提出改善项目的建议
+ title: "[IDEA]"
+ labels: enhancement
+ assignees: ''
+
+ ---
+
+ ### 这个功能请求是用来解决什么问题的? / Is your feature request related to a problem? Please describe.
+ *请清晰简洁地描述您遇到的问题。例如:我总是在 [...] 时感到不方便。*
+ *A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]*
+
+ [在这里输入问题描述 / Type problem description here]
+
+ ### 您期望的解决方案是什么? / Describe the solution you'd like
+ *请清晰简洁地描述您希望实现的功能或效果。*
+ *A clear and concise description of what you want to happen.*
+
+ [在此处输入期望的解决方案 / Type desired solution here]
+
+ ### 此功能为何对 Open-LLM-VTuber 很重要? / Why is this important for Open-LLM-VTuber?
+ *请解释为什么这个功能对 Open-LLM-VTuber 项目来说是实用且重要的。它能带来什么价值?例如,它如何提升用户体验、扩展项目能力、解决核心痛点等。*
+ *Explain why this feature would be useful and significant for the Open-LLM-VTuber project. What value does it add? For example, how does it improve user experience, extend project capabilities, or solve core pain points?*
+
+ [在此处说明其重要性 / Explain its importance here]
+
+ ### 您考虑过哪些替代方案? / Describe alternatives you've considered
+ *请清晰简洁地描述您考虑过的任何替代解决方案或特性。*
+ *A clear and concise description of any alternative solutions or features you've considered.*
+
+ [在此处输入替代方案 / Type alternatives here]
+
+ ### 您是否愿意参与开发此功能? / Would you like to work on this issue?
+ *请回答 Yes 或 No。如果您愿意,我们可以讨论后续步骤。*
+ *Please answer Yes or No. If yes, we can discuss the next steps.*
+
+ [回答 Yes/No / Answer Yes/No]
+
+ ### 补充信息 / Additional context
+ *在此处添加有关此功能请求的任何其他上下文、截图、日志或设计稿。*
+ *Add any other context, screenshots, logs, or mockups about the feature request here.*
+
+ [在此处添加补充信息 / Add additional context here]
.github/copilot-instructions.md ADDED
@@ -0,0 +1,106 @@
+ # Open-LLM-VTuber AI Coding Assistant: Context & Guidelines
+
+ `version: 2025.08.05-1`
+
+ ## 1. Core Project Context
+
+ - **Project:** Open-LLM-VTuber, a low-latency voice-based LLM interaction tool.
+ - **Language:** Python >= 3.10
+ - **Core Tech Stack:**
+   - **Backend:** FastAPI, Pydantic v2, Uvicorn, fully async
+   - **Real-time Communication:** WebSockets
+   - **Package Management:** `uv` (version ~= 0.8, as of August 2025); always use `uv run`, `uv sync`, `uv add`, and `uv remove` instead of `pip`
+ - **Primary Goal:** Achieve end-to-end latency below 500 ms (user speaks -> AI voice heard). Performance is critical.
+ - **Key Principles:**
+   - **Offline-Ready:** Core functionality MUST work without an internet connection.
+   - **Separation of Concerns:** Strict frontend-backend separation.
+   - **Clean Code:** Clean, testable, maintainable code that follows Python 3.10+ best practices and avoids deprecated patterns.
+
+ Some key files and directories:
+
+ ```
+ doc/                      # A deprecated directory
+ frontend/                 # Compiled web frontend artifacts (from git submodule)
+ config_templates/
+   conf.default.yaml       # Configuration template for English users
+   conf.ZH.default.yaml    # Configuration template for Chinese users
+ src/open_llm_vtuber/      # Project source code
+   config_manager/
+     main.py               # Pydantic models for configuration validation
+ run_server.py             # Entrypoint to start the application
+ conf.yaml                 # User's configuration file, generated from a template
+ ```
+
+ ### 1.1. Repository Structure
+
+ - Frontend Repository: The frontend is a React application developed in a separate repository, `Open-LLM-VTuber-Web`. Its built artifacts are integrated into the `frontend/` directory of this backend repository via a git submodule.
+
+ - Documentation Repository: The official documentation site is hosted in the `open-llm-vtuber.github.io` repository. When asked to generate documentation, create Markdown files in the project root. The user will be responsible for migrating them to the documentation site.
+
+ ### 1.2. Configuration Files
+
+ - Configuration templates are located in the `config_templates/` directory:
+   - `conf.default.yaml`: Template for English-speaking users.
+   - `conf.ZH.default.yaml`: Template for Chinese-speaking users.
+ - When modifying the configuration structure, both template files MUST be updated accordingly.
+ - Configuration is validated on load using the Pydantic models defined in `src/open_llm_vtuber/config_manager/main.py`. Any changes to configuration options must be reflected in these models.
+
+ ## 2. Overarching Coding Philosophy
+
+ - **Simplicity and Readability:** Write code that is simple, clear, and easy to understand. Avoid unnecessary complexity or premature optimization. Follow the Zen of Python.
+ - **Single Responsibility:** Each function, class, and module should do one thing and do it well.
+ - **Performance-Aware:** Be mindful of performance. Avoid blocking operations in async contexts. Use efficient data structures and algorithms where it matters.
+ - **Adherence to Best Practices:** Write clean, testable, and robust code that follows modern Python 3.10+ idioms. Adhere to the best practices of our core libraries (FastAPI, Pydantic v2).
+
+ ## 3. Detailed Coding Standards
+
+ ### 3.1. Formatting & Linting (Ruff)
+
+ - All Python code **MUST** be formatted with `uv run ruff format`.
+ - All Python code **MUST** pass `uv run ruff check` without errors.
+ - Import statements should be grouped by standard library, third-party, and local modules, and sorted alphabetically (PEP 8).
+
+ ### 3.2. Naming Conventions (PEP 8)
+
+ - Use `snake_case` for all variables, functions, methods, and module names.
+ - Use `PascalCase` for class names.
+ - Choose descriptive names. Avoid single-letter names except for loop counters or well-known initialisms.
+
+ ### 3.3. Type Hints (CRITICAL)
+
+ - Target Python 3.10+. Use modern type hint syntax.
+ - **DO:** Use `|` for unions (e.g., `str | None`).
+ - **DON'T:** Use `Optional` from `typing` (e.g., `Optional[str]`).
+ - **DO:** Use built-in generics (e.g., `list[int]`, `dict[str, float]`).
+ - **DON'T:** Use capitalized types from `typing` (e.g., `List[int]`, `Dict[str, float]`).
+ - All function and method signatures (arguments and return values) **MUST** have accurate type hints. If a third-party library makes a type error impossible to fix, suppress the type checker for that line.
+
+ ### 3.4. Docstrings & Comments (CRITICAL)
+
+ - All public modules, functions, classes, and methods **MUST** have a docstring in English.
+ - Use the **Google Python Style** for docstrings.
+ - Docstrings **MUST** include:
+   1. Summary.
+   2. `Args:` section describing each parameter, its type, and its purpose.
+   3. `Returns:` section describing the return value, its type, and its meaning.
+   4. (Optional but encouraged) `Raises:` section for any exceptions thrown.
+ - All other code comments must also be in English.
+
+ ### 3.5. Logging
+
+ - Use the `loguru` module for all informational or error output.
+ - Log messages should be in English, clear, and informative. Use emoji when appropriate.
+
+ ## 4. Architectural Principles
+
+ ### 4.1. Dependency Management
+
+ - First, try to solve the problem using the Python standard library or existing project dependencies defined in `pyproject.toml`.
+ - If a new dependency is required, it must have a compatible license and be well-maintained.
+ - Use `uv add`, `uv remove`, and `uv run` instead of `pip` to manage dependencies. If the user is on conda, install `uv` with `pip` first.
+ - After adding a new dependency to `pyproject.toml`, you must add it to `requirements.txt` as well.
+
+ ### 4.2. Cross-Platform Compatibility
+
+ - All core logic **MUST** run on macOS, Windows, and Linux.
+ - If a feature is platform-specific (e.g., uses a Windows-only API) or hardware-specific (e.g., CUDA), it **MUST** be an optional component. The application should start and run core features even if that component is not available. Use graceful fallbacks or clear error messages.
.github/workflows/codeql.yml ADDED
@@ -0,0 +1,92 @@
+ # For most projects, this workflow file will not need changing; you simply need
+ # to commit it to your repository.
+ #
+ # You may wish to alter this file to override the set of languages analyzed,
+ # or to provide custom queries or build logic.
+ #
+ # ******** NOTE ********
+ # We have attempted to detect the languages in your repository. Please check
+ # the `language` matrix defined below to confirm you have the correct set of
+ # supported CodeQL languages.
+ #
+ name: "CodeQL Advanced"
+
+ on:
+   push:
+     branches: [ "main" ]
+   pull_request:
+     branches: [ "main" ]
+   schedule:
+     - cron: '32 5 * * 6'
+
+ jobs:
+   analyze:
+     name: Analyze (${{ matrix.language }})
+     # Runner size impacts CodeQL analysis time. To learn more, please see:
+     #   - https://gh.io/recommended-hardware-resources-for-running-codeql
+     #   - https://gh.io/supported-runners-and-hardware-resources
+     #   - https://gh.io/using-larger-runners (GitHub.com only)
+     # Consider using larger runners or machines with greater resources for possible analysis time improvements.
+     runs-on: ${{ (matrix.language == 'swift' && 'macos-latest') || 'ubuntu-latest' }}
+     permissions:
+       # required for all workflows
+       security-events: write
+       # required to fetch internal or private CodeQL packs
+       packages: read
+       # only required for workflows in private repositories
+       actions: read
+       contents: read
+
+     strategy:
+       fail-fast: false
+       matrix:
+         include:
+           - language: python
+             build-mode: none
+     # CodeQL supports the following values for 'language': 'c-cpp', 'csharp', 'go', 'java-kotlin', 'javascript-typescript', 'python', 'ruby', 'swift'
+     # Use 'c-cpp' to analyze code written in C, C++ or both
+     # Use 'java-kotlin' to analyze code written in Java, Kotlin or both
+     # Use 'javascript-typescript' to analyze code written in JavaScript, TypeScript or both
+     # To learn more about changing the languages that are analyzed or customizing the build mode for your analysis,
+     # see https://docs.github.com/en/code-security/code-scanning/creating-an-advanced-setup-for-code-scanning/customizing-your-advanced-setup-for-code-scanning.
+     # If you are analyzing a compiled language, you can modify the 'build-mode' for that language to customize how
+     # your codebase is analyzed, see https://docs.github.com/en/code-security/code-scanning/creating-an-advanced-setup-for-code-scanning/codeql-code-scanning-for-compiled-languages
+     steps:
+       - name: Checkout repository
+         uses: actions/checkout@v4
+
+       # Initializes the CodeQL tools for scanning.
+       - name: Initialize CodeQL
+         uses: github/codeql-action/init@v3
+         with:
+           languages: ${{ matrix.language }}
+           build-mode: ${{ matrix.build-mode }}
+           # If you wish to specify custom queries, you can do so here or in a config file.
+           # By default, queries listed here will override any specified in a config file.
+           # Prefix the list here with "+" to use these queries and those in the config file.
+
+           # For more details on CodeQL's query packs, refer to: https://docs.github.com/en/code-security/code-scanning/automatically-scanning-your-code-for-vulnerabilities-and-errors/configuring-code-scanning#using-queries-in-ql-packs
+           # queries: security-extended,security-and-quality
+
+       # If the analyze step fails for one of the languages you are analyzing with
+       # "We were unable to automatically build your code", modify the matrix above
+       # to set the build mode to "manual" for that language. Then modify this step
+       # to build your code.
+       # ℹ️ Command-line programs to run using the OS shell.
+       # 📚 See https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_idstepsrun
+       - if: matrix.build-mode == 'manual'
+         shell: bash
+         run: |
+           echo 'If you are using a "manual" build mode for one or more of the' \
+             'languages you are analyzing, replace this with the commands to build' \
+             'your code, for example:'
+           echo '  make bootstrap'
+           echo '  make release'
+           exit 1
+
+       - name: Perform CodeQL Analysis
+         uses: github/codeql-action/analyze@v3
+         with:
+           category: "/language:${{matrix.language}}"
.github/workflows/create_release.yml ADDED
@@ -0,0 +1,238 @@
+ name: Create Release Packages
+
+ # Only manual trigger
+ on:
+   workflow_dispatch:
+     inputs:
+       version_override:
+         description: "Override version number in pyproject.toml (leave empty to use file version)"
+         required: false
+       upload_to_r2:
+         description: "Upload to Cloudflare R2"
+         type: boolean
+         default: true
+       create_github_release:
+         description: "Create GitHub Release"
+         type: boolean
+         default: true
+       target_branch:
+         description: "Branch to build (default is v1-release)"
+         required: false
+         default: "v1-release"
+
+ jobs:
+   build-release-packages:
+     runs-on: ubuntu-latest
+     steps:
+       - name: Clone repository
+         uses: actions/checkout@v3
+         with:
+           repository: Open-LLM-VTuber/Open-LLM-VTuber
+           ref: ${{ github.event.inputs.target_branch }}
+           submodules: true
+           fetch-depth: 1
+           fetch-tags: true
+         continue-on-error: true
+         id: checkout
+
+       - name: Try with default branch
+         if: steps.checkout.outcome == 'failure'
+         uses: actions/checkout@v3
+         with:
+           repository: Open-LLM-VTuber/Open-LLM-VTuber
+           ref: v1-release
+           submodules: true
+           fetch-depth: 1
+
+       # Add debug step to check file structure
+       - name: Debug - Check repository structure
+         run: |
+           echo "Current working directory: $(pwd)"
+           echo "List root directory contents:"
+           ls -la
+           echo "Check if config_templates directory exists:"
+           ls -la | grep config_templates || echo "config_templates directory does not exist"
+
+           echo "List config_templates directory contents:"
+           ls -la config_templates/
+
+       - name: Setup Python
+         uses: actions/setup-python@v4
+         with:
+           python-version: "3.10"
+
+       - name: Extract version from pyproject.toml
+         id: get_version
+         run: |
+           VERSION=$(grep -m 1 'version' pyproject.toml | sed 's/[^"]*"\([^"]*\).*/\1/')
+           if [ "${{ github.event.inputs.version_override }}" != "" ]; then
+             VERSION="${{ github.event.inputs.version_override }}"
+           fi
+           echo "VERSION=$VERSION" >> $GITHUB_ENV
+           echo "Found version: $VERSION"
+
+       # Download and prepare ASR model
+       - name: Download and prepare ASR model
+         run: |
+           mkdir -p models
+           cd models
+           echo "Downloading ASR model..."
+           wget https://github.com/k2-fsa/sherpa-onnx/releases/download/asr-models/sherpa-onnx-sense-voice-zh-en-ja-ko-yue-2024-07-17.tar.bz2
+           echo "Extracting model..."
+           tar -xjf sherpa-onnx-sense-voice-zh-en-ja-ko-yue-2024-07-17.tar.bz2
+           rm sherpa-onnx-sense-voice-zh-en-ja-ko-yue-2024-07-17.tar.bz2
+           echo "Removing model.onnx file to reduce size..."
+           rm -f sherpa-onnx-sense-voice-zh-en-ja-ko-yue-2024-07-17/model.onnx
+
+       # Clean unnecessary files
+       - name: Clean project
+         run: |
+           echo "Cleaning __pycache__ and .venv folders..."
+           find . -type d -name "__pycache__" -exec rm -rf {} + 2>/dev/null || true
+           find . -type d -name ".venv" -exec rm -rf {} + 2>/dev/null || true
+
+       # Create Chinese version
+       - name: Create Chinese version
+         run: |
+           echo "Creating Chinese version..."
+           cp config_templates/conf.ZH.default.yaml conf.yaml
+           zip -r Open-LLM-VTuber-v${{ env.VERSION }}-zh.zip . -x "*.zip"
+           rm conf.yaml
+
+       # Create English version
+       - name: Create English version
+         run: |
+           echo "Creating English version..."
+           cp config_templates/conf.default.yaml conf.yaml
+           zip -r Open-LLM-VTuber-v${{ env.VERSION }}-en.zip . -x "*.zip"
+           rm conf.yaml
+
+       # Get latest Electron app
+       - name: Get latest Electron app
+         id: download_electron
+         run: |
+           set -e
+
+           # Fetch the latest release JSON from the GitHub API
+           RELEASE_JSON=$(curl --silent "https://api.github.com/repos/Open-LLM-VTuber/Open-LLM-VTuber-Web/releases/latest")
+
+           # Use jq to extract browser_download_url for assets ending with .exe or .dmg
+           ASSET_URLS=$(echo "$RELEASE_JSON" | jq -r '.assets[] | select(.name | endswith(".exe") or endswith(".dmg")) | .browser_download_url')
+
+           # Download each asset into the current directory
+           for url in $ASSET_URLS; do
+             echo "Downloading $(basename "$url")..."
+             curl -L -O "$url"
+             ls -la
+           done
+
+       # If chosen, upload to GitHub Actions artifacts
+       - name: Upload Chinese version to GitHub Actions artifacts
+         if: ${{ github.event.inputs.create_github_release == 'true' }}
+         uses: actions/upload-artifact@v4
+         with:
+           name: Open-LLM-VTuber-v${{ env.VERSION }}-zh
+           path: Open-LLM-VTuber-v${{ env.VERSION }}-zh.zip
+           retention-days: 30
+
+       - name: Upload English version to GitHub Actions artifacts
+         if: ${{ github.event.inputs.create_github_release == 'true' }}
+         uses: actions/upload-artifact@v4
+         with:
+           name: Open-LLM-VTuber-v${{ env.VERSION }}-en
+           path: Open-LLM-VTuber-v${{ env.VERSION }}-en.zip
+           retention-days: 30
+
+       - name: Upload Windows installer to GitHub Actions artifacts
+         if: ${{ github.event.inputs.create_github_release == 'true' }}
+         uses: actions/upload-artifact@v4
+         with:
+           name: Open-LLM-VTuber-v${{ env.VERSION }}-windows
+           path: open-llm-vtuber-electron-*-setup.exe
+           retention-days: 30
+
+       - name: Upload macOS installer to GitHub Actions artifacts
+         if: ${{ github.event.inputs.create_github_release == 'true' }}
+         uses: actions/upload-artifact@v4
+         with:
+           name: Open-LLM-VTuber-v${{ env.VERSION }}-macos
+           path: open-llm-vtuber-electron-*.dmg
+           retention-days: 30
+
+       - name: Debug input parameters
+         run: |
+           echo "upload_to_r2 value: '${{ github.event.inputs.upload_to_r2 }}'"
+           # Note: workflow_dispatch inputs arrive in expressions as strings, hence the 'true' comparison below
+
+       # If chosen, upload to Cloudflare R2
+       - name: Upload to Cloudflare R2
+         if: ${{ github.event.inputs.upload_to_r2 == 'true' }}
+         env:
+           AWS_ACCESS_KEY_ID: ${{ secrets.R2_ACCESS_KEY_ID }}
+           AWS_SECRET_ACCESS_KEY: ${{ secrets.R2_SECRET_ACCESS_KEY }}
+           R2_ENDPOINT: ${{ secrets.R2_ENDPOINT }}
+           R2_PUBLIC_URL: ${{ secrets.R2_PUBLIC_URL }}
+         run: |
+           # Install AWS CLI
+           pip install awscli
+           echo "AWS CLI installation complete"
+           # Configure AWS CLI for Cloudflare R2
+           aws configure set aws_access_key_id "$AWS_ACCESS_KEY_ID"
+           aws configure set aws_secret_access_key "$AWS_SECRET_ACCESS_KEY"
+
+           # Confirm AWS CLI configuration
+           echo "AWS CLI configured, preparing to upload files..."
+
+           # Create version directory in bucket
+           aws s3 --endpoint-url=$R2_ENDPOINT cp --recursive --acl public-read . s3://open-llm-vtuber-release/v${{ env.VERSION }}/ --exclude "*" --include "Open-LLM-VTuber-v${{ env.VERSION }}-*.zip" --include "open-llm-vtuber-electron-*.dmg" --include "open-llm-vtuber-electron-*-setup.exe"
+
+           # Output public URLs
+           echo "Files uploaded to R2. Public URLs:"
+           for file in Open-LLM-VTuber-v${{ env.VERSION }}-zh.zip Open-LLM-VTuber-v${{ env.VERSION }}-en.zip open-llm-vtuber-electron-*.dmg open-llm-vtuber-electron-*-setup.exe; do
+             echo "$R2_PUBLIC_URL/v${{ env.VERSION }}/$file"
+           done
+
+           echo "R2 upload process completed"
+
+       # Generate download links markdown
+       - name: Generate R2 download links markdown
+         if: ${{ github.event.inputs.upload_to_r2 == 'true' }}
+         env:
+           R2_PUBLIC_URL: ${{ secrets.R2_PUBLIC_URL }}
+         run: |
+           # Get electron app version from filenames
+           EXE_VERSION=$(ls open-llm-vtuber-electron-*-setup.exe | sed -E 's/open-llm-vtuber-electron-(.*)-setup.exe/\1/')
+           DMG_VERSION=$(ls open-llm-vtuber-electron-*.dmg | sed -E 's/open-llm-vtuber-electron-(.*).dmg/\1/')
+
+           # Create markdown text with download links and save to file
+           cat > download-links.md << EOF
+
+           ## Faster download links for Chinese users 给内地用户准备的(相对)快速的下载链接
+           Open-LLM-VTuber-v${{ env.VERSION }}-zh.zip (包含 sherpa onnx asr 的 sense-voice 模型,就不用再从github上拉取了)
+           - [Open-LLM-VTuber-v${{ env.VERSION }}-en.zip]($R2_PUBLIC_URL/v${{ env.VERSION }}/Open-LLM-VTuber-v${{ env.VERSION }}-en.zip)
+           - [Open-LLM-VTuber-v${{ env.VERSION }}-zh.zip]($R2_PUBLIC_URL/v${{ env.VERSION }}/Open-LLM-VTuber-v${{ env.VERSION }}-zh.zip)
+
+           open-llm-vtuber-electron-$EXE_VERSION-frontend.exe (桌面版前端,Windows)
+           - [open-llm-vtuber-electron-$EXE_VERSION-setup.exe]($R2_PUBLIC_URL/v${{ env.VERSION }}/open-llm-vtuber-electron-$EXE_VERSION-setup.exe)
+
+           open-llm-vtuber-electron-$DMG_VERSION-frontend.dmg (桌面版前端,macOS)
+           - [open-llm-vtuber-electron-$DMG_VERSION.dmg]($R2_PUBLIC_URL/v${{ env.VERSION }}/open-llm-vtuber-electron-$DMG_VERSION.dmg)
+           EOF
+
+           echo "Download links markdown file created"
+
+       # Upload download links as an artifact
+       - name: Upload download links markdown
+         if: ${{ github.event.inputs.upload_to_r2 == 'true' }}
+         uses: actions/upload-artifact@v4
+         with:
+           name: download-links
+           path: download-links.md
+           retention-days: 30
+
+       # Add the download links to GitHub release if creating one
+       - name: Add download links to release description
+         if: ${{ github.event.inputs.upload_to_r2 == 'true' && github.event.inputs.create_github_release == 'true' }}
+         id: download_links
+         run: |
+           # ::set-output is deprecated and cannot carry multiline values; use GITHUB_OUTPUT with a heredoc delimiter
+           {
+             echo "download_links<<MARKDOWN_EOF"
+             cat download-links.md
+             echo "MARKDOWN_EOF"
+           } >> "$GITHUB_OUTPUT"
.github/workflows/docker-blacksmith.yml ADDED
@@ -0,0 +1,207 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ name: Docker Build & Push (Blacksmith)
2
+
3
+ on:
4
+ push:
5
+ branches:
6
+ - main
7
+ tags:
8
+ - "v*"
9
+ - "*.*.*"
10
+ pull_request:
11
+ branches:
12
+ - main
13
+ workflow_dispatch:
14
+
15
+ concurrency:
16
+ group: docker-blacksmith-${{ github.ref }}
17
+ cancel-in-progress: true
18
+
19
+ permissions:
20
+ contents: read
21
+ packages: write
22
+
23
+ env:
24
+ DOCKERFILE: dockerfile
25
+ CONTEXT: .
26
+ DOCKERHUB_IMAGE: ${{ vars.DOCKERHUB_IMAGE || 'openllmvtuber/open-llm-vtuber' }}
27
+ GHCR_IMAGE: ${{ vars.GHCR_IMAGE || '' }}
28
+
29
+ jobs:
30
+ meta:
31
+ runs-on: blacksmith-8vcpu-ubuntu-2204
32
+ outputs:
33
+ tags: ${{ steps.meta.outputs.tags }}
34
+ labels: ${{ steps.meta.outputs.labels }}
35
+ dockerhub_image: ${{ steps.image.outputs.dockerhub_image }}
36
+ ghcr_image: ${{ steps.image.outputs.ghcr_image }}
37
+ steps:
38
+ - name: Resolve image names
39
+ id: image
40
+ shell: bash
41
+ run: |
42
+ set -euo pipefail
43
+ dockerhub_image="${DOCKERHUB_IMAGE}"
44
+ if [ -n "${GHCR_IMAGE:-}" ]; then
45
+ ghcr_image="${GHCR_IMAGE}"
46
+ else
47
+ ghcr_image="ghcr.io/${GITHUB_REPOSITORY,,}"
48
+ fi
49
+ echo "dockerhub_image=${dockerhub_image}" >> "$GITHUB_OUTPUT"
50
+ echo "ghcr_image=${ghcr_image}" >> "$GITHUB_OUTPUT"
51
+
52
+ - name: Docker image metadata
53
+ id: meta
54
+ uses: docker/metadata-action@v5
55
+ with:
56
+ images: |
57
+ ${{ steps.image.outputs.dockerhub_image }}
58
+ ${{ steps.image.outputs.ghcr_image }}
59
+ tags: |
60
+ type=ref,event=branch
61
+ type=ref,event=tag
62
+ type=semver,pattern={{version}}
63
+ type=semver,pattern={{major}}.{{minor}}
64
+ type=sha,format=short
65
+ type=raw,value=latest,enable={{is_default_branch}}
66
+
67
+ build:
68
+ needs: meta
69
+ runs-on: ${{ matrix.runner }}
70
+ strategy:
71
+ fail-fast: false
72
+ matrix:
73
+ include:
74
+ - platform: amd64
75
+ runner: blacksmith-8vcpu-ubuntu-2204
76
+ docker_platform: linux/amd64
77
+ - platform: arm64
78
+ runner: blacksmith-8vcpu-ubuntu-2204-arm
79
+ docker_platform: linux/arm64
80
+ steps:
81
+ - name: Checkout
82
+ uses: actions/checkout@v4
83
+
84
+ - name: Set up Blacksmith Docker builder
85
+ uses: useblacksmith/setup-docker-builder@v1
86
+
87
+ - name: Login to Docker Hub (retry)
88
+ if: github.event_name != 'pull_request'
89
+ shell: bash
90
+ env:
91
+ DOCKERHUB_USERNAME: ${{ secrets.DOCKERHUB_USERNAME }}
92
+ DOCKERHUB_TOKEN: ${{ secrets.DOCKERHUB_TOKEN }}
93
+ run: |
94
+ set -euo pipefail
95
+ for attempt in 1 2 3 4; do
96
+ if echo "${DOCKERHUB_TOKEN}" | docker login -u "${DOCKERHUB_USERNAME}" --password-stdin; then
97
+ exit 0
98
+ fi
99
+ if [ "${attempt}" -eq 4 ]; then
100
+ echo "Docker Hub login failed after ${attempt} attempts." >&2
101
+ exit 1
102
+ fi
103
+ sleep $((attempt * 5))
104
+ done
105
+
106
+ - name: Login to GHCR (retry)
107
+ if: github.event_name != 'pull_request'
108
+ shell: bash
109
+ env:
110
+ GHCR_USER: ${{ github.actor }}
111
+ GHCR_TOKEN: ${{ secrets.GITHUB_TOKEN }}
112
+ run: |
113
+ set -euo pipefail
114
+           for attempt in 1 2 3 4; do
+             if echo "${GHCR_TOKEN}" | docker login ghcr.io -u "${GHCR_USER}" --password-stdin; then
+               exit 0
+             fi
+             if [ "${attempt}" -eq 4 ]; then
+               echo "GHCR login failed after ${attempt} attempts." >&2
+               exit 1
+             fi
+             sleep $((attempt * 5))
+           done
+
+       - name: Prepare temporary arch tags
+         id: temp-tags
+         shell: bash
+         run: |
+           set -euo pipefail
+           ref_slug="$(echo "${GITHUB_REF_NAME}" | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9_.-]/-/g')"
+           suffix="tmp-${ref_slug}-${{ matrix.platform }}"
+           {
+             echo "tags<<EOF"
+             echo "${{ needs.meta.outputs.dockerhub_image }}:${suffix}"
+             echo "${{ needs.meta.outputs.ghcr_image }}:${suffix}"
+             echo "EOF"
+           } >> "$GITHUB_OUTPUT"
+
+       - name: Build and push (${{ matrix.platform }})
+         uses: useblacksmith/build-push-action@v2
+         with:
+           context: ${{ env.CONTEXT }}
+           file: ${{ env.DOCKERFILE }}
+           platforms: ${{ matrix.docker_platform }}
+           push: ${{ github.event_name != 'pull_request' }}
+           tags: ${{ steps.temp-tags.outputs.tags }}
+           labels: ${{ needs.meta.outputs.labels }}
+
+   manifest:
+     needs: [meta, build]
+     runs-on: ubuntu-latest
+     if: github.event_name != 'pull_request'
+     steps:
+       - name: Login to Docker Hub (retry)
+         shell: bash
+         env:
+           DOCKERHUB_USERNAME: ${{ secrets.DOCKERHUB_USERNAME }}
+           DOCKERHUB_TOKEN: ${{ secrets.DOCKERHUB_TOKEN }}
+         run: |
+           set -euo pipefail
+           for attempt in 1 2 3 4; do
+             if echo "${DOCKERHUB_TOKEN}" | docker login -u "${DOCKERHUB_USERNAME}" --password-stdin; then
+               exit 0
+             fi
+             if [ "${attempt}" -eq 4 ]; then
+               echo "Docker Hub login failed after ${attempt} attempts." >&2
+               exit 1
+             fi
+             sleep $((attempt * 5))
+           done
+
+       - name: Login to GHCR (retry)
+         shell: bash
+         env:
+           GHCR_USER: ${{ github.actor }}
+           GHCR_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+         run: |
+           set -euo pipefail
+           for attempt in 1 2 3 4; do
+             if echo "${GHCR_TOKEN}" | docker login ghcr.io -u "${GHCR_USER}" --password-stdin; then
+               exit 0
+             fi
+             if [ "${attempt}" -eq 4 ]; then
+               echo "GHCR login failed after ${attempt} attempts." >&2
+               exit 1
+             fi
+             sleep $((attempt * 5))
+           done
+
+       - name: Set up Buildx
+         uses: docker/setup-buildx-action@v3
+
+       - name: Create and push multi-arch manifests
+         shell: bash
+         run: |
+           set -euo pipefail
+           ref_slug="$(echo "${GITHUB_REF_NAME}" | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9_.-]/-/g')"
+           suffix_base="tmp-${ref_slug}"
+           mapfile -t tags <<< "${{ needs.meta.outputs.tags }}"
+           for tag in "${tags[@]}"; do
+             [ -z "$tag" ] && continue
+             base="${tag%:*}"
+             docker buildx imagetools create \
+               --tag "$tag" \
+               "${base}:${suffix_base}-amd64" \
+               "${base}:${suffix_base}-arm64"
+           done
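The branch-name sanitization that builds the temporary per-arch tags above can be exercised locally; the ref name below is just an illustrative example, not a real branch:

```shell
# Reproduce the workflow's tag-slug sanitization: lowercase the ref name,
# then replace every character outside [a-z0-9_.-] with '-'.
ref_name="Feature/My Branch_v1.0"
ref_slug="$(echo "${ref_name}" | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9_.-]/-/g')"
echo "tmp-${ref_slug}-amd64"   # prints: tmp-feature-my-branch_v1.0-amd64
```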
.github/workflows/fossa_scan.yml ADDED
@@ -0,0 +1,16 @@
+ name: Fossa Scan
+
+ on:
+   push:
+     branches:
+       - main
+   workflow_dispatch:
+
+ jobs:
+   fossa-scan:
+     runs-on: ubuntu-latest
+     steps:
+       - uses: actions/checkout@v3
+       - uses: fossas/fossa-action@main
+         with:
+           api-key: ${{ secrets.fossaApiKey }}
.github/workflows/ruff.yml ADDED
@@ -0,0 +1,8 @@
+ name: Ruff
+ on: [ push, pull_request ]
+ jobs:
+   ruff:
+     runs-on: ubuntu-latest
+     steps:
+       - uses: actions/checkout@v4
+       - uses: astral-sh/ruff-action@v3
.github/workflows/update-requirements.yml ADDED
@@ -0,0 +1,30 @@
+ name: Sync Requirements
+
+ on:
+   push:
+     paths:
+       - pyproject.toml
+
+ jobs:
+   regenerate:
+     runs-on: ubuntu-latest
+     permissions:
+       contents: write
+     steps:
+       - name: Check out repository
+         uses: actions/checkout@v4
+         with:
+           fetch-depth: 0
+       - name: Set up uv
+         uses: astral-sh/setup-uv@v3
+       - name: Compile default requirements
+         run: uv pip compile pyproject.toml -o requirements.txt --no-deps --universal
+       - name: Compile bilibili requirements
+         run: uv pip compile pyproject.toml --extra bilibili -o requirements-bilibili.txt --no-deps --universal --no-annotate --no-header
+       - name: Commit updated requirements
+         uses: stefanzweifel/git-auto-commit-action@v5
+         with:
+           commit_message: "chore: update requirements files (bot)"
+           file_pattern: |
+             requirements.txt
+             requirements-bilibili.txt
.gitignore ADDED
@@ -0,0 +1,78 @@
+ # ignore the user configuration file
+ conf.yaml
+
+ # ignore non-default background images
+ /backgrounds/*
+ !/backgrounds/README.md
+
+ # ignore non-default live2d models
+ /live2d-models/*
+ !/live2d-models/mao_pro
+ !/live2d-models/shizuku
+
+ # ignore non-default character configs
+ /characters/*
+ !/characters/README.md
+ # but all of the defaults are already tracked, so no need to un-ignore them
+
+ # ignore avatars
+ avatars/*
+
+ # macOS system files
+ .DS_Store
+
+ # Python files
+ __pycache__/
+ *.pyc
+ lab.py
+ .idea
+
+ # Virtual environment
+ .venv
+ .conda
+ conda
+
+ # Sensitive data
+ .env
+ api_keys.py
+ src/open_llm_vtuber/llm/user_credentials.json
+
+ # Database files
+ memory.db
+ memory.db.bk
+
+ # Logs
+ server.log
+ logs/*
+
+ # Cache and models
+ cache/*
+ src/open_llm_vtuber/tts/asset
+ src/open_llm_vtuber/tts/config
+ src/open_llm_vtuber/asr/models*
+ !src/open_llm_vtuber/asr/models/silero_vad.onnx
+ asset
+ models/*
+ !models/piper_voice
+ models/piper_voice/*
+
+
+ # Legacy and specific directories
+ legacy/
+ chat_history/
+ knowledge_base/
+ submodules/MeloTTS
+ openapi_assistants.json
+ openapi_memgpt.json
+
+ # memory log
+ mem.json
+ conf.yaml.backup
+
+ *.exe
+
+ tmp/
+ private/
+ mcp_servers.json
+
+ .vscode/
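As a sanity check on the negation patterns above (ignoring `/backgrounds/*` while keeping `README.md`), a throwaway repo can verify the behavior; this sketch assumes `git` is on PATH:

```shell
# Re-inclusion works here because '/backgrounds/*' ignores the directory's
# contents rather than the directory itself, so '!' can rescue README.md.
repo="$(mktemp -d)"
cd "$repo" && git init -q
printf '%s\n' '/backgrounds/*' '!/backgrounds/README.md' > .gitignore
mkdir backgrounds && touch backgrounds/README.md backgrounds/sky.png
git check-ignore -q backgrounds/sky.png && echo "sky.png ignored"
git check-ignore -q backgrounds/README.md || echo "README.md kept"
```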
.gitmodules ADDED
@@ -0,0 +1,4 @@
+ [submodule "frontend"]
+     path = frontend
+     url = https://github.com/Open-LLM-VTuber/Open-LLM-VTuber-Web
+     branch = build
.pre-commit-config.yaml ADDED
@@ -0,0 +1,9 @@
+ repos:
+   - repo: https://github.com/astral-sh/ruff-pre-commit
+     rev: v0.9.6
+     hooks:
+       - id: ruff
+         args: [--fix, --exit-non-zero-on-fix]
+       - id: ruff-format
+
+
.python-version ADDED
@@ -0,0 +1 @@
+ 3.10
CLAUDE.md ADDED
@@ -0,0 +1,156 @@
+ # CLAUDE.md
+
+ This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
+
+ ## Project Overview
+
+ Open-LLM-VTuber is a voice-interactive AI companion with Live2D avatar support that runs completely offline. It's a cross-platform Python application supporting real-time voice conversations, visual perception, and Live2D character animations. The project features a modular architecture for LLM, ASR (Automatic Speech Recognition), TTS (Text-to-Speech), and other components.
+
+ ## Essential Commands
+
+ ### Development Setup
+ - **Install dependencies**: `uv sync` (uses the uv package manager)
+ - **Run server**: `uv run run_server.py`
+ - **Run with verbose logging**: `uv run run_server.py --verbose`
+ - **Update project**: `uv run upgrade.py`
+
+ ### Code Quality
+ - **Lint code**: `ruff check .`
+ - **Format code**: `ruff format .`
+ - **Run pre-commit hooks**: `pre-commit run --all-files`
+
+ ### Server Configuration
+ - **Main config file**: `conf.yaml` (user configuration)
+ - **Default configs**: `config_templates/conf.default.yaml` and `config_templates/conf.ZH.default.yaml`
+ - **Character configs**: `characters/` directory (YAML files)
+
+ ## Architecture Overview
+
+ ### Core Components
+
+ **WebSocket Server** (`src/open_llm_vtuber/server.py`):
+ - FastAPI-based server handling WebSocket connections
+ - Serves frontend, Live2D models, and static assets
+ - Supports both main client and proxy WebSocket endpoints
+
+ **Service Context** (`src/open_llm_vtuber/service_context.py`):
+ - Central dependency injection container
+ - Manages all engines (LLM, ASR, TTS, VAD, etc.)
+ - Each WebSocket connection gets its own service context instance
+
+ **WebSocket Handler** (`src/open_llm_vtuber/websocket_handler.py`):
+ - Routes WebSocket messages to appropriate handlers
+ - Manages client connections, groups, and conversation state
+ - Handles audio data, conversation triggers, and Live2D interactions
+
+ ### Modular Engine System
+
+ The project uses a factory pattern for all AI engines:
+
+ **Agent System** (`src/open_llm_vtuber/agent/`):
+ - `agent_factory.py` - Factory for creating different agent types
+ - `agents/` - Various agent implementations (basic_memory, hume_ai, letta, mem0)
+ - `stateless_llm/` - Stateless LLM implementations (Claude, OpenAI, Ollama, etc.)
+
+ **ASR Engines** (`src/open_llm_vtuber/asr/`):
+ - Support for multiple ASR backends: Sherpa-ONNX, FunASR, Faster-Whisper, OpenAI Whisper, etc.
+ - Factory pattern for engine selection based on configuration
+
+ **TTS Engines** (`src/open_llm_vtuber/tts/`):
+ - Multiple TTS options: Azure TTS, Edge TTS, MeloTTS, CosyVoice, GPT-SoVITS, etc.
+ - Configurable voice cloning and multi-language support
+
+ **VAD (Voice Activity Detection)** (`src/open_llm_vtuber/vad/`):
+ - Silero VAD for detecting speech activity
+ - Essential for voice interruption without feedback loops
+
+ ### Configuration Management
+
+ **Config System** (`src/open_llm_vtuber/config_manager/`):
+ - Type-safe configuration classes for each component
+ - Automatic validation and loading from YAML files
+ - Support for multiple character configurations and config switching
+
+ ### Conversation System
+
+ **Conversation Handling** (`src/open_llm_vtuber/conversations/`):
+ - `conversation_handler.py` - Main conversation orchestration
+ - `single_conversation.py` - Individual user conversations
+ - `group_conversation.py` - Multi-user group conversations
+ - `tts_manager.py` - Audio streaming and TTS management
+
+ ### MCP (Model Context Protocol) Integration
+
+ **MCP System** (`src/open_llm_vtuber/mcpp/`):
+ - Tool execution and server registry
+ - JSON detection and parameter extraction
+ - Integration with various MCP servers for extended functionality
+
+ ## Key Development Patterns
+
+ ### Error Handling
+ The codebase references a `_cleanup_failed_connection` method that is currently missing. When implementing new WebSocket handlers, make sure the corresponding cleanup methods actually exist and release connection resources on failure.
+
+ ### Live2D Integration
+ - Models stored in `live2d-models/` directory
+ - Each model has its own `.model3.json` configuration
+ - Expression and motion control through WebSocket messages
+
+ ### Audio Processing
+ - Real-time audio streaming through WebSocket
+ - Voice interruption support without headphones
+ - Multi-format audio support with proper codec handling
+
+ ### Multi-language Support
+ - Character configurations support multiple languages
+ - TTS translation capabilities (speak in a different language than the input)
+ - I18n system for UI elements
+
+ ## Important File Locations
+
+ - **Entry point**: `run_server.py`
+ - **Main server**: `src/open_llm_vtuber/server.py`
+ - **WebSocket routing**: `src/open_llm_vtuber/routes.py`
+ - **Configuration**: `conf.yaml` (user), `config_templates/` (defaults)
+ - **Frontend**: `frontend/` (Git submodule)
+ - **Live2D models**: `live2d-models/`
+ - **Character definitions**: `characters/`
+ - **Chat history**: `chat_history/`
+ - **Cache**: `cache/` (audio files, temporary data)
+
+ ## Development Guidelines
+
+ ### Adding New Engines
+ 1. Create interface in appropriate directory (e.g., `asr_interface.py`)
+ 2. Implement concrete class following existing patterns
+ 3. Add to factory class (e.g., `asr_factory.py`)
+ 4. Update configuration classes in `config_manager/`
+ 5. Add configuration options to default YAML files
+
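The five steps above can be sketched as a toy factory. The class and registry names here are illustrative stand-ins, not the project's actual API:

```python
# Minimal sketch of the engine-factory pattern: an abstract interface,
# a concrete engine, and a factory that selects by configuration name.
from abc import ABC, abstractmethod


class ASRInterface(ABC):
    """Interface every ASR engine implements (hypothetical)."""

    @abstractmethod
    def transcribe(self, audio: bytes) -> str: ...


class SherpaOnnxASR(ASRInterface):
    """Stand-in concrete engine; a real one would load an ONNX model."""

    def transcribe(self, audio: bytes) -> str:
        return f"<sherpa transcript of {len(audio)} bytes>"


class ASRFactory:
    # Step 3: each new engine registers itself here under its config name.
    _engines = {"sherpa_onnx_asr": SherpaOnnxASR}

    @classmethod
    def get_asr_system(cls, name: str, **kwargs) -> ASRInterface:
        try:
            return cls._engines[name](**kwargs)
        except KeyError:
            raise ValueError(f"Unknown ASR engine: {name}")


engine = ASRFactory.get_asr_system("sherpa_onnx_asr")
print(engine.transcribe(b"\x00" * 320))  # <sherpa transcript of 320 bytes>
```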
+ ### WebSocket Message Handling
+ 1. Add message type to `MessageType` enum in `websocket_handler.py`
+ 2. Create handler method following `_handle_*` pattern
+ 3. Register in `_init_message_handlers()` dictionary
+ 4. Ensure proper error handling and client response
+
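A minimal sketch of that registration pattern follows; the enum values and handler names are hypothetical, not the real `websocket_handler.py` API:

```python
# Message routing: an enum of message types mapped to _handle_* methods.
from enum import Enum


class MessageType(Enum):
    AUDIO_DATA = "audio-data"
    TEXT_INPUT = "text-input"


class WebSocketHandler:
    def __init__(self):
        # Step 3: register handlers keyed by message type
        self._handlers = {
            MessageType.AUDIO_DATA: self._handle_audio_data,
            MessageType.TEXT_INPUT: self._handle_text_input,
        }

    def route(self, message: dict) -> str:
        msg_type = MessageType(message["type"])
        return self._handlers[msg_type](message)

    def _handle_audio_data(self, message: dict) -> str:
        return f"received {len(message.get('audio', b''))} audio bytes"

    def _handle_text_input(self, message: dict) -> str:
        return f"echo: {message.get('text', '')}"


handler = WebSocketHandler()
print(handler.route({"type": "text-input", "text": "hi"}))  # echo: hi
```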
+ ### Configuration Changes
+ - Always update both default config templates
+ - Maintain backward compatibility when possible
+ - Use the upgrade system for breaking changes
+ - Validate configurations in respective config manager classes
+
+ ## Testing and Quality Assurance
+
+ The project uses:
+ - **Ruff** for linting and formatting (configured in `pyproject.toml`)
+ - **Pre-commit hooks** for automated quality checks
+ - **GitHub Actions** for CI/CD (`.github/workflows/`)
+ - Manual testing through the web interface and desktop client
+
+ ## Package Management
+
+ Uses **uv** (a modern Python package manager):
+ - Dependencies defined in `pyproject.toml`
+ - Lock file: `uv.lock`
+ - Generated requirements: `requirements.txt` (auto-generated)
+ - Optional dependencies for specific features (e.g., `bilibili` extra)
CONTRIBUTING.md ADDED
@@ -0,0 +1,4 @@
+
+ Please read the [Development - Overview](https://open-llm-vtuber.github.io/docs/development-guide/overview) before contributing.
+
+ If the site is down (like after a thousand years), refer to the [source repo of our documentation site](https://github.com/Open-LLM-VTuber/open-llm-vtuber.github.io/blob/main/docs/development-guide/overview.md).
Dockerfile ADDED
@@ -0,0 +1,81 @@
+ FROM python:3.10-slim
+
+ ENV DEBIAN_FRONTEND=noninteractive \
+     PYTHONDONTWRITEBYTECODE=1 \
+     PYTHONUNBUFFERED=1 \
+     PIP_NO_CACHE_DIR=1 \
+     UV_LINK_MODE=copy \
+     CONFIG_FILE=/app/conf/conf.yaml
+
+ WORKDIR /app
+
+ # Base dependencies
+ RUN apt-get update -o Acquire::Retries=5 \
+     && apt-get install -y --no-install-recommends \
+        ffmpeg git curl ca-certificates \
+     && rm -rf /var/lib/apt/lists/*
+
+ # Install uv
+ COPY --from=ghcr.io/astral-sh/uv:latest /uv /uvx /usr/local/bin/
+
+ # Install deps (cache-friendly)
+ COPY pyproject.toml uv.lock ./
+ RUN --mount=type=cache,target=/root/.cache/uv \
+     uv sync --frozen --no-dev
+
+ # Copy source & install project
+ COPY . /app
+ RUN uv pip install --no-deps .
+
+ # Startup script
+ RUN printf '%s\n' \
+     '#!/usr/bin/env sh' \
+     'set -eu' \
+     '' \
+     'mkdir -p /app/conf /app/models' \
+     '' \
+     '# 1) conf.yaml (required)' \
+     'if [ -f "/app/conf/conf.yaml" ]; then' \
+     '  echo "Using user-provided conf.yaml"' \
+     '  ln -sf /app/conf/conf.yaml /app/conf.yaml' \
+     'else' \
+     '  echo "ERROR: conf.yaml is required."' \
+     '  echo "Please mount your config dir to /app/conf"' \
+     '  exit 1' \
+     'fi' \
+     '' \
+     '# 2) model_dict.json (optional)' \
+     'if [ -f "/app/conf/model_dict.json" ]; then' \
+     '  ln -sf /app/conf/model_dict.json /app/model_dict.json' \
+     'fi' \
+     '' \
+     '# 3) live2d-models' \
+     'if [ -d "/app/conf/live2d-models" ]; then' \
+     '  rm -rf /app/live2d-models && ln -s /app/conf/live2d-models /app/live2d-models' \
+     'fi' \
+     '' \
+     '# 4) characters' \
+     'if [ -d "/app/conf/characters" ]; then' \
+     '  rm -rf /app/characters && ln -s /app/conf/characters /app/characters' \
+     'fi' \
+     '' \
+     '# 5) avatars' \
+     'if [ -d "/app/conf/avatars" ]; then' \
+     '  rm -rf /app/avatars && ln -s /app/conf/avatars /app/avatars' \
+     'fi' \
+     '' \
+     '# 6) backgrounds' \
+     'if [ -d "/app/conf/backgrounds" ]; then' \
+     '  rm -rf /app/backgrounds && ln -s /app/conf/backgrounds /app/backgrounds' \
+     'fi' \
+     '' \
+     '# 7) start app' \
+     'exec uv run run_server.py' \
+     > /usr/local/bin/start-app && chmod +x /usr/local/bin/start-app
+
+ # Volumes
+ VOLUME ["/app/conf", "/app/models"]
+
+ EXPOSE 12393
+
+ CMD ["/usr/local/bin/start-app"]
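The generated `start-app` script's required-config check can be tried outside the container; this sketch uses a temp dir in place of `/app`, and the config contents are illustrative:

```shell
# Mimic the entrypoint: symlink a mounted conf.yaml into place, or fail.
app="$(mktemp -d)"
mkdir -p "$app/conf"
echo "host: 0.0.0.0" > "$app/conf/conf.yaml"
if [ -f "$app/conf/conf.yaml" ]; then
  echo "Using user-provided conf.yaml"
  ln -sf "$app/conf/conf.yaml" "$app/conf.yaml"
else
  echo "ERROR: conf.yaml is required." >&2
  exit 1
fi
cat "$app/conf.yaml"   # prints: host: 0.0.0.0
```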
README.md CHANGED
@@ -1,10 +1,158 @@
- ---
- title: Open LLM
- emoji: 📈
- colorFrom: yellow
- colorTo: green
- sdk: docker
- pinned: false
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ ![](./assets/banner.jpg)
+
+ <h1 align="center">Open-LLM-VTuber</h1>
+ <h3 align="center">
+
+ [![GitHub release](https://img.shields.io/github/v/release/Open-LLM-VTuber/Open-LLM-VTuber)](https://github.com/Open-LLM-VTuber/Open-LLM-VTuber/releases)
+ [![license](https://img.shields.io/github/license/Open-LLM-VTuber/Open-LLM-VTuber)](https://github.com/Open-LLM-VTuber/Open-LLM-VTuber/blob/master/LICENSE)
+ [![CodeQL](https://github.com/Open-LLM-VTuber/Open-LLM-VTuber/actions/workflows/codeql.yml/badge.svg)](https://github.com/Open-LLM-VTuber/Open-LLM-VTuber/actions/workflows/codeql.yml)
+ [![Ruff](https://github.com/Open-LLM-VTuber/Open-LLM-VTuber/actions/workflows/ruff.yml/badge.svg)](https://github.com/Open-LLM-VTuber/Open-LLM-VTuber/actions/workflows/ruff.yml)
+ [![Docker](https://img.shields.io/badge/Open-LLM-VTuber%2FOpen--LLM--VTuber-%25230db7ed.svg?logo=docker&logoColor=blue&labelColor=white&color=blue)](https://hub.docker.com/r/Open-LLM-VTuber/open-llm-vtuber)
+ [![QQ User Group](https://img.shields.io/badge/QQ_User_Group-792615362-white?style=flat&logo=qq&logoColor=white)](https://qm.qq.com/q/ngvNUQpuKI)
+ [![Static Badge](https://img.shields.io/badge/Join%20Chat-Zulip?style=flat&logo=zulip&label=Zulip(dev-community)&color=blue&link=https%3A%2F%2Folv.zulipchat.com)](https://olv.zulipchat.com)
+
+ > **📢 v2.0 Development**: We are focusing on Open-LLM-VTuber v2.0 — a complete rewrite of the codebase. v2.0 is currently in its early discussion and planning phase. We kindly ask you to refrain from opening new issues or pull requests for feature requests on v1. To participate in the v2 discussions or contribute, join our developer community on [Zulip](https://olv.zulipchat.com). Weekly meeting schedules will be announced on Zulip. We will continue fixing bugs for v1 and work through existing pull requests.
+
+ [![BuyMeACoffee](https://img.shields.io/badge/Buy%20Me%20a%20Coffee-ffdd00?style=for-the-badge&logo=buy-me-a-coffee&logoColor=black)](https://www.buymeacoffee.com/yi.ting)
+ [![](https://dcbadge.limes.pink/api/server/3UDA8YFDXx)](https://discord.gg/3UDA8YFDXx)
+
+ [![Ask DeepWiki](https://deepwiki.com/badge.svg)](https://deepwiki.com/Open-LLM-VTuber/Open-LLM-VTuber)
+
+ ENGLISH README | [中文 README](./README.CN.md) | [한국어 README](./README.KR.md) | [日本語 README](./README.JP.md)
+
+ [Documentation](https://open-llm-vtuber.github.io/docs/quick-start) | [![Roadmap](https://img.shields.io/badge/Roadmap-GitHub_Project-yellow)](https://github.com/orgs/Open-LLM-VTuber/projects/2)
+
+ <a href="https://trendshift.io/repositories/12358" target="_blank"><img src="https://trendshift.io/api/badge/repositories/12358" alt="Open-LLM-VTuber%2FOpen-LLM-VTuber | Trendshift" style="width: 250px; height: 55px;" width="250" height="55"/></a>
+
+ </h3>
+
+ > Common Issues doc (written in Chinese): https://docs.qq.com/pdf/DTFZGQXdTUXhIYWRq
+ >
+ > User Survey: https://forms.gle/w6Y6PiHTZr1nzbtWA
+ >
+ > User Survey (Chinese): https://wj.qq.com/s2/16150415/f50a/
+
+ > :warning: This project is in its early stages and is currently under **active development**.
+
+ > :warning: If you want to run the server remotely and access it from a different machine, such as running the server on your computer and accessing it on your phone, you will need to configure `https`, because the microphone on the frontend will only launch in a secure context (i.e., https or localhost). See the [MDN Web Doc](https://developer.mozilla.org/en-US/docs/Web/API/MediaDevices/getUserMedia). In that case, set up a reverse proxy with https to access the page from a remote (non-localhost) machine.
+
+ ## ⭐️ What is this project?
+
+ **Open-LLM-VTuber** is a unique **voice-interactive AI companion** that not only supports **real-time voice conversations** and **visual perception** but also features a lively **Live2D avatar**. All functionalities can run completely offline on your computer!
+
+ You can treat it as your personal AI companion — whether you want a `virtual girlfriend`, `boyfriend`, `cute pet`, or any other character, it can meet your expectations. The project fully supports `Windows`, `macOS`, and `Linux`, and offers two usage modes: a web version and a desktop client (with special support for a **transparent-background desktop pet mode**, allowing the AI companion to accompany you anywhere on your screen).
+
+ Although the long-term memory feature is temporarily removed (coming back soon), thanks to the persistent storage of chat logs, you can always continue your previous unfinished conversations without losing any precious interactive moments.
+
+ In terms of backend support, we have integrated a rich variety of LLM inference, text-to-speech, and speech recognition solutions. If you want to customize your AI companion, you can refer to the [Character Customization Guide](https://open-llm-vtuber.github.io/docs/user-guide/live2d) to customize your AI companion's appearance and persona.
+
+ The reason it's called `Open-LLM-Vtuber` instead of `Open-LLM-Companion` or `Open-LLM-Waifu` is that the project's initial development goal was to use open-source solutions that can run offline on platforms other than Windows to recreate the closed-source AI VTuber `neuro-sama`.
+
+ ### 👀 Demo
+ | ![](assets/i1.jpg) | ![](assets/i2.jpg) |
+ |:---:|:---:|
+ | ![](assets/i3.jpg) | ![](assets/i4.jpg) |
+
+ ## ✨ Features & Highlights
+
+ - 🖥️ **Cross-platform support**: Perfect compatibility with macOS, Linux, and Windows. We support NVIDIA and non-NVIDIA GPUs, with options to run on CPU or use cloud APIs for resource-intensive tasks. Some components support GPU acceleration on macOS.
+
+ - 🔒 **Offline mode support**: Run completely offline using local models - no internet required. Your conversations stay on your device, ensuring privacy and security.
+
+ - 💻 **Attractive and powerful web and desktop clients**: Offers both web version and desktop client usage modes, supporting rich interactive features and personalization settings. The desktop client can switch freely between window mode and desktop pet mode, allowing the AI companion to be by your side at all times.
+
+ - 🎯 **Advanced interaction features**:
+   - 👁️ Visual perception, supporting camera, screen recording, and screenshots, allowing your AI companion to see you and your screen
+   - 🎤 Voice interruption without headphones (the AI won't hear its own voice)
+   - 🫱 Touch feedback, interact with your AI companion through clicks or drags
+   - 😊 Live2D expressions, set emotion mapping to control model expressions from the backend
+   - 🐱 Pet mode, supporting transparent background, global top-most, and mouse click-through - drag your AI companion anywhere on the screen
+   - 💭 Display the AI's inner thoughts, allowing you to see its expressions, thoughts, and actions without them being spoken
+   - 🗣️ AI proactive speaking feature
+   - 💾 Chat log persistence, switch to previous conversations anytime
+   - 🌍 TTS translation support (e.g., chat in Chinese while the AI uses a Japanese voice)
+
+ - 🧠 **Extensive model support**:
+   - 🤖 Large Language Models (LLM): Ollama, OpenAI (and any OpenAI-compatible API), Gemini, Claude, Mistral, DeepSeek, Zhipu AI, GGUF, LM Studio, vLLM, etc.
+   - 🎙️ Automatic Speech Recognition (ASR): sherpa-onnx, FunASR, Faster-Whisper, Whisper.cpp, Whisper, Groq Whisper, Azure ASR, etc.
+   - 🔊 Text-to-Speech (TTS): sherpa-onnx, pyttsx3, MeloTTS, Coqui-TTS, GPTSoVITS, Bark, CosyVoice, Edge TTS, Fish Audio, Azure TTS, etc.
+
+ - 🔧 **Highly customizable**:
+   - ⚙️ **Simple module configuration**: Switch various functional modules through simple configuration file modifications, without delving into the code
+   - 🎨 **Character customization**: Import custom Live2D models to give your AI companion a unique appearance. Shape your AI companion's persona by modifying the Prompt. Perform voice cloning to give your AI companion the voice you desire
+   - 🧩 **Flexible Agent implementation**: Inherit and implement the Agent interface to integrate any Agent architecture, such as HumeAI EVI, OpenAI Her, Mem0, etc.
+   - 🔌 **Good extensibility**: Modular design allows you to easily add your own LLM, ASR, TTS, and other module implementations, extending new features at any time
+
+ ## 👥 User Reviews
+ > Thanks to the developer for open-sourcing and sharing the girlfriend for everyone to use
+ >
+ > This girlfriend has been used over 100,000 times
+
+ ## 🚀 Quick Start
+
+ Please refer to the [Quick Start](https://open-llm-vtuber.github.io/docs/quick-start) section in our documentation for installation.
+
+ ## ☝ Update
+ > :warning: `v1.0.0` has breaking changes and requires re-deployment. You *may* still update via the method below, but the `conf.yaml` file is incompatible and most of the dependencies need to be reinstalled with `uv`. For those coming from versions before `v1.0.0`, I recommend deploying this project again with the [latest deployment guide](https://open-llm-vtuber.github.io/docs/quick-start).
+
+ Please use `uv run update.py` to update if you installed any version later than `v1.0.0`.
+
+ ## 😢 Uninstall
+ Most files, including Python dependencies and models, are stored in the project folder.
+
+ However, models downloaded via ModelScope or Hugging Face may also be in `MODELSCOPE_CACHE` or `HF_HOME`. While we aim to keep them in the project's `models` directory, it's good to double-check.
+
+ Review the installation guide for any extra tools you no longer need, such as `uv`, `ffmpeg`, or `deeplx`.
+
+ ## 🤗 Want to contribute?
+ Check out the [development guide](https://docs.llmvtuber.com/docs/development-guide/overview).
+
+ # 🎉🎉🎉 Related Projects
+
+ [ylxmf2005/LLM-Live2D-Desktop-Assitant](https://github.com/ylxmf2005/LLM-Live2D-Desktop-Assitant)
+ - Your Live2D desktop assistant powered by LLM! Available for both Windows and macOS, it senses your screen, retrieves clipboard content, and responds to voice commands with a unique voice. Featuring voice wake-up, singing capabilities, and full computer control for seamless interaction with your favorite character.
+
+ ## 📜 Third-Party Licenses
+
+ ### Live2D Sample Models Notice
+
+ This project includes Live2D sample models provided by Live2D Inc. These assets are licensed separately under the Live2D Free Material License Agreement and the Terms of Use for Live2D Cubism Sample Data. They are not covered by the MIT license of this project.
+
+ This content uses sample data owned and copyrighted by Live2D Inc. The sample data are utilized in accordance with the terms and conditions set by Live2D Inc. (See [Live2D Free Material License Agreement](https://www.live2d.jp/en/terms/live2d-free-material-license-agreement/) and [Terms of Use](https://www.live2d.com/eula/live2d-sample-model-terms_en.html)).
+
+ Note: For commercial use, especially by medium or large-scale enterprises, the use of these Live2D sample models may be subject to additional licensing requirements. If you plan to use this project commercially, please ensure that you have the appropriate permissions from Live2D Inc., or use versions of the project without these models.
+
+ ## Contributors
+ Thanks to our contributors and maintainers for making this project possible.
+
+ <a href="https://github.com/Open-LLM-VTuber/Open-LLM-VTuber/graphs/contributors">
+   <img src="https://contrib.rocks/image?repo=Open-LLM-VTuber/Open-LLM-VTuber" />
+ </a>
+
+ ## Star History
+
+ [![Star History Chart](https://api.star-history.com/svg?repos=Open-LLM-VTuber/open-llm-vtuber&type=Date)](https://star-history.com/#Open-LLM-VTuber/open-llm-vtuber&Date)
doc/README.md ADDED
@@ -0,0 +1,4 @@
+ For full documentation, please visit our [documentation site](https://open-llm-vtuber.github.io/) or view the [source repository](https://github.com/Open-LLM-VTuber/open-llm-vtuber.github.io).
+
+ > **Note:**
+ > The `sample_conf` directory contains legacy sample configuration files for running various models with sherpa-onnx. These files are deprecated and will be removed after we extract the relevant sherpa-onnx information.
doc/sample_conf/sherpaASRTTS_sense_voice_melo.yaml ADDED
@@ -0,0 +1,78 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ SYSTEM_CONFIG:
2
+ CONF_NAME: "sherpaASRTTS_sense_voice_melo"
3
+ CONF_UID: "sherpaASRTTS_sense_voice_melo"
4
+
5
+ # ============== Voice Interaction Settings ==============
6
+
7
+ # === Automatic Speech Recognition ===
8
+ VOICE_INPUT_ON: True
9
+ # Put your mic in the browser or in the terminal? (would increase latency)
10
+ MIC_IN_BROWSER: False # Deprecated and useless now. Do not enable it. Bad things will happen.
11
+
12
+ # speech to text model options: "Faster-Whisper", "WhisperCPP", "Whisper", "AzureASR", "FunASR", "GroqWhisperASR", "SherpaOnnxASR"
13
+ ASR_MODEL: "SherpaOnnxASR"
14
+
15
+ # pip install sherpa-onnx
16
+ # documentation: https://k2-fsa.github.io/sherpa/onnx/index.html
17
+ # ASR models download: https://github.com/k2-fsa/sherpa-onnx/releases/tag/asr-models
18
+ SherpaOnnxASR:
19
+ model_type: "sense_voice" # "transducer", "paraformer", "nemo_ctc", "wenet_ctc", "whisper", "tdnn_ctc"
20
+ # Choose only ONE of the following, depending on the model_type:
21
+ # --- For model_type: "transducer" ---
22
+ # encoder: "" # Path to the encoder model (e.g., "path/to/encoder.onnx")
23
+ # decoder: "" # Path to the decoder model (e.g., "path/to/decoder.onnx")
24
+ # joiner: "" # Path to the joiner model (e.g., "path/to/joiner.onnx")
25
+ # --- For model_type: "paraformer" ---
26
+ # paraformer: "" # Path to the paraformer model (e.g., "path/to/model.onnx")
27
+ # --- For model_type: "nemo_ctc" ---
28
+   # nemo_ctc: "" # Path to the NeMo CTC model (e.g., "path/to/model.onnx")
+   # --- For model_type: "wenet_ctc" ---
+   # wenet_ctc: "" # Path to the WeNet CTC model (e.g., "path/to/model.onnx")
+   # --- For model_type: "tdnn_ctc" ---
+   # tdnn_model: "" # Path to the TDNN CTC model (e.g., "path/to/model.onnx")
+   # --- For model_type: "whisper" ---
+   # whisper_encoder: "" # Path to the Whisper encoder model (e.g., "path/to/encoder.onnx")
+   # whisper_decoder: "" # Path to the Whisper decoder model (e.g., "path/to/decoder.onnx")
+   # --- For model_type: "sense_voice" ---
+   sense_voice: "/path/to/asr-models/sherpa-onnx-sense-voice-zh-en-ja-ko-yue-2024-07-17/model.onnx" # Path to the SenseVoice model (e.g., "path/to/model.onnx")
+   tokens: "/path/to/asr-models/sherpa-onnx-sense-voice-zh-en-ja-ko-yue-2024-07-17/tokens.txt" # Path to tokens.txt (required for all model types)
+   # --- Optional parameters (with defaults shown) ---
+   # hotwords_file: "" # Path to hotwords file (if using hotwords)
+   # hotwords_score: 1.5 # Score for hotwords
+   # modeling_unit: "" # Modeling unit for hotwords (if applicable)
+   # bpe_vocab: "" # Path to BPE vocabulary (if applicable)
+   num_threads: 4 # Number of threads
+   # whisper_language: "" # Language for Whisper models (e.g., "en", "zh", etc. - if using Whisper)
+   # whisper_task: "transcribe" # Task for Whisper models ("transcribe" or "translate" - if using Whisper)
+   # whisper_tail_paddings: -1 # Tail padding for Whisper models (if using Whisper)
+   # blank_penalty: 0.0 # Penalty for blank symbol
+   # decoding_method: "greedy_search" # "greedy_search" or "modified_beam_search"
+   # debug: False # Enable debug mode
+   # sample_rate: 16000 # Sample rate (should match the model's expected sample rate)
+   # feature_dim: 80 # Feature dimension (should match the model's expected feature dimension)
+   use_itn: True # Enable ITN for SenseVoice models (should be set to False if not using SenseVoice models)
+
+ # ============== Text to Speech ==============
+ TTS_MODEL: "SherpaOnnxTTS"
+ # text to speech model options:
+ # "AzureTTS", "pyttsx3TTS", "edgeTTS", "barkTTS",
+ # "cosyvoiceTTS", "meloTTS", "piperTTS", "coquiTTS",
+ # "fishAPITTS", "SherpaOnnxTTS"
+
+
+ # pip install sherpa-onnx
+ # documentation: https://k2-fsa.github.io/sherpa/onnx/index.html
+ # TTS models download: https://github.com/k2-fsa/sherpa-onnx/releases/tag/tts-models
+ SherpaOnnxTTS:
+   vits_model: "/path/to/tts-models/vits-melo-tts-zh_en/model.onnx" # Path to VITS model file
+   vits_lexicon: "/path/to/tts-models/vits-melo-tts-zh_en/lexicon.txt" # Path to lexicon file (optional)
+   vits_tokens: "/path/to/tts-models/vits-melo-tts-zh_en/tokens.txt" # Path to tokens file
+   vits_data_dir: "" # "/path/to/tts-models/vits-piper-en_GB-cori-high/espeak-ng-data" # Path to espeak-ng data (optional)
+   vits_dict_dir: "/path/to/tts-models/vits-melo-tts-zh_en/dict" # Path to Jieba dict (optional, for Chinese)
+   tts_rule_fsts: "/path/to/tts-models/vits-melo-tts-zh_en/number.fst,/path/to/tts-models/vits-melo-tts-zh_en/phone.fst,/path/to/tts-models/vits-melo-tts-zh_en/date.fst,/path/to/tts-models/vits-melo-tts-zh_en/new_heteronym.fst" # Path to rule FSTs file (optional)
+   max_num_sentences: 2 # Max sentences per batch (or -1 for all)
+   sid: 1 # Speaker ID (for multi-speaker models)
+   provider: "cpu" # Use "cpu", "cuda" (GPU), or "coreml" (Apple)
+   num_threads: 1 # Number of computation threads
+   speed: 1.0 # Speech speed (1.0 is normal)
+   debug: false # Enable debug mode (True/False)
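Because the `SherpaOnnxASR` section requires a different set of model-file keys depending on `model_type`, a small validation helper can catch a misconfigured sample before sherpa-onnx is ever loaded. This is a minimal sketch, not part of the repo: the key names come from the config above, but the `missing_asr_keys` helper itself is hypothetical.

```python
# Hypothetical helper (not in the repo): map each SherpaOnnxASR model_type to
# the config keys it needs, using the key names from the sample config above.
REQUIRED_KEYS = {
    "transducer": ["encoder", "decoder", "joiner"],
    "paraformer": ["paraformer"],
    "nemo_ctc": ["nemo_ctc"],
    "wenet_ctc": ["wenet_ctc"],
    "tdnn_ctc": ["tdnn_model"],
    "whisper": ["whisper_encoder", "whisper_decoder"],
    "sense_voice": ["sense_voice"],
}

def missing_asr_keys(cfg: dict) -> list[str]:
    """Return config keys that are absent or empty for the chosen model_type."""
    model_type = cfg.get("model_type", "")
    # tokens.txt is required for every model type, per the comment in the config.
    required = REQUIRED_KEYS.get(model_type, []) + ["tokens"]
    return [key for key in required if not cfg.get(key)]

cfg = {
    "model_type": "sense_voice",
    "sense_voice": "/path/to/model.onnx",
    "tokens": "/path/to/tokens.txt",
    "num_threads": 4,
    "use_itn": True,
}
print(missing_asr_keys(cfg))                        # []
print(missing_asr_keys({"model_type": "whisper"}))  # ['whisper_encoder', 'whisper_decoder', 'tokens']
```

The same check works for any of the sample configs in this commit, since they all share the one-active-model-type layout.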
doc/sample_conf/sherpaASRTTS_sense_voice_piper_en.yaml ADDED
@@ -0,0 +1,77 @@
+ SYSTEM_CONFIG:
+   CONF_NAME: "sherpaASRTTS_sense_voice_piper_en"
+   CONF_UID: "sherpaASRTTS_sense_voice_piper_en"
+
+ # ============== Voice Interaction Settings ==============
+
+ # === Automatic Speech Recognition ===
+ VOICE_INPUT_ON: True
+ # Put your mic in the browser or in the terminal? (would increase latency)
+ MIC_IN_BROWSER: False # Deprecated and useless now. Do not enable it. Bad things will happen.
+
+ # speech to text model options: "Faster-Whisper", "WhisperCPP", "Whisper", "AzureASR", "FunASR", "GroqWhisperASR", "SherpaOnnxASR"
+ ASR_MODEL: "SherpaOnnxASR"
+
+ # pip install sherpa-onnx
+ # documentation: https://k2-fsa.github.io/sherpa/onnx/index.html
+ # ASR models download: https://github.com/k2-fsa/sherpa-onnx/releases/tag/asr-models
+ SherpaOnnxASR:
+   model_type: "sense_voice" # "transducer", "paraformer", "nemo_ctc", "wenet_ctc", "whisper", "tdnn_ctc"
+   # Choose only ONE of the following, depending on the model_type:
+   # --- For model_type: "transducer" ---
+   # encoder: "" # Path to the encoder model (e.g., "path/to/encoder.onnx")
+   # decoder: "" # Path to the decoder model (e.g., "path/to/decoder.onnx")
+   # joiner: "" # Path to the joiner model (e.g., "path/to/joiner.onnx")
+   # --- For model_type: "paraformer" ---
+   # paraformer: "" # Path to the paraformer model (e.g., "path/to/model.onnx")
+   # --- For model_type: "nemo_ctc" ---
+   # nemo_ctc: "" # Path to the NeMo CTC model (e.g., "path/to/model.onnx")
+   # --- For model_type: "wenet_ctc" ---
+   # wenet_ctc: "" # Path to the WeNet CTC model (e.g., "path/to/model.onnx")
+   # --- For model_type: "tdnn_ctc" ---
+   # tdnn_model: "" # Path to the TDNN CTC model (e.g., "path/to/model.onnx")
+   # --- For model_type: "whisper" ---
+   # whisper_encoder: "" # Path to the Whisper encoder model (e.g., "path/to/encoder.onnx")
+   # whisper_decoder: "" # Path to the Whisper decoder model (e.g., "path/to/decoder.onnx")
+   # --- For model_type: "sense_voice" ---
+   sense_voice: "/path/to/asr-models/sherpa-onnx-sense-voice-zh-en-ja-ko-yue-2024-07-17/model.onnx" # Path to the SenseVoice model (e.g., "path/to/model.onnx")
+   tokens: "/path/to/asr-models/sherpa-onnx-sense-voice-zh-en-ja-ko-yue-2024-07-17/tokens.txt" # Path to tokens.txt (required for all model types)
+   # --- Optional parameters (with defaults shown) ---
+   # hotwords_file: "" # Path to hotwords file (if using hotwords)
+   # hotwords_score: 1.5 # Score for hotwords
+   # modeling_unit: "" # Modeling unit for hotwords (if applicable)
+   # bpe_vocab: "" # Path to BPE vocabulary (if applicable)
+   num_threads: 4 # Number of threads
+   # whisper_language: "" # Language for Whisper models (e.g., "en", "zh", etc. - if using Whisper)
+   # whisper_task: "transcribe" # Task for Whisper models ("transcribe" or "translate" - if using Whisper)
+   # whisper_tail_paddings: -1 # Tail padding for Whisper models (if using Whisper)
+   # blank_penalty: 0.0 # Penalty for blank symbol
+   # decoding_method: "greedy_search" # "greedy_search" or "modified_beam_search"
+   # debug: False # Enable debug mode
+   # sample_rate: 16000 # Sample rate (should match the model's expected sample rate)
+   # feature_dim: 80 # Feature dimension (should match the model's expected feature dimension)
+   use_itn: True # Enable ITN for SenseVoice models (should be set to False if not using SenseVoice models)
+
+ # ============== Text to Speech ==============
+ TTS_MODEL: "SherpaOnnxTTS"
+ # text to speech model options:
+ # "AzureTTS", "pyttsx3TTS", "edgeTTS", "barkTTS",
+ # "cosyvoiceTTS", "meloTTS", "piperTTS", "coquiTTS",
+ # "fishAPITTS", "SherpaOnnxTTS"
+
+ # pip install sherpa-onnx
+ # documentation: https://k2-fsa.github.io/sherpa/onnx/index.html
+ # TTS models download: https://github.com/k2-fsa/sherpa-onnx/releases/tag/tts-models
+ SherpaOnnxTTS:
+   vits_model: "/path/to/tts-models/vits-piper-en_GB-cori-high/en_GB-cori-high.onnx" # Path to VITS model file
+   vits_lexicon: "" # Path to lexicon file (optional)
+   vits_tokens: "/path/to/tts-models/vits-piper-en_GB-cori-high/tokens.txt" # Path to tokens file
+   vits_data_dir: "/path/to/tts-models/vits-piper-en_GB-cori-high/espeak-ng-data" # Path to espeak-ng data (optional)
+   vits_dict_dir: "" # Path to Jieba dict (optional, for Chinese)
+   tts_rule_fsts: "" # Path to rule FSTs file (optional)
+   max_num_sentences: 2 # Max sentences per batch (or -1 for all)
+   sid: 0 # Speaker ID (for multi-speaker models)
+   provider: "cpu" # Use "cpu", "cuda" (GPU), or "coreml" (Apple)
+   num_threads: 1 # Number of computation threads
+   speed: 1.0 # Speech speed (1.0 is normal)
+   debug: false # Enable debug mode (True/False)
doc/sample_conf/sherpaASRTTS_sense_voice_vits_zh.yaml ADDED
@@ -0,0 +1,77 @@
+ SYSTEM_CONFIG:
+   CONF_NAME: "sherpaASRTTS_sense_voice_vits_zh"
+   CONF_UID: "sherpaASRTTS_sense_voice_vits_zh"
+
+ # ============== Voice Interaction Settings ==============
+
+ # === Automatic Speech Recognition ===
+ VOICE_INPUT_ON: True
+ # Put your mic in the browser or in the terminal? (would increase latency)
+ MIC_IN_BROWSER: False # Deprecated and useless now. Do not enable it. Bad things will happen.
+
+ # speech to text model options: "Faster-Whisper", "WhisperCPP", "Whisper", "AzureASR", "FunASR", "GroqWhisperASR", "SherpaOnnxASR"
+ ASR_MODEL: "SherpaOnnxASR"
+
+ # pip install sherpa-onnx
+ # documentation: https://k2-fsa.github.io/sherpa/onnx/index.html
+ # ASR models download: https://github.com/k2-fsa/sherpa-onnx/releases/tag/asr-models
+ SherpaOnnxASR:
+   model_type: "sense_voice" # "transducer", "paraformer", "nemo_ctc", "wenet_ctc", "whisper", "tdnn_ctc"
+   # Choose only ONE of the following, depending on the model_type:
+   # --- For model_type: "transducer" ---
+   # encoder: "" # Path to the encoder model (e.g., "path/to/encoder.onnx")
+   # decoder: "" # Path to the decoder model (e.g., "path/to/decoder.onnx")
+   # joiner: "" # Path to the joiner model (e.g., "path/to/joiner.onnx")
+   # --- For model_type: "paraformer" ---
+   # paraformer: "" # Path to the paraformer model (e.g., "path/to/model.onnx")
+   # --- For model_type: "nemo_ctc" ---
+   # nemo_ctc: "" # Path to the NeMo CTC model (e.g., "path/to/model.onnx")
+   # --- For model_type: "wenet_ctc" ---
+   # wenet_ctc: "" # Path to the WeNet CTC model (e.g., "path/to/model.onnx")
+   # --- For model_type: "tdnn_ctc" ---
+   # tdnn_model: "" # Path to the TDNN CTC model (e.g., "path/to/model.onnx")
+   # --- For model_type: "whisper" ---
+   # whisper_encoder: "" # Path to the Whisper encoder model (e.g., "path/to/encoder.onnx")
+   # whisper_decoder: "" # Path to the Whisper decoder model (e.g., "path/to/decoder.onnx")
+   # --- For model_type: "sense_voice" ---
+   sense_voice: "/path/to/asr-models/sherpa-onnx-sense-voice-zh-en-ja-ko-yue-2024-07-17/model.onnx" # Path to the SenseVoice model (e.g., "path/to/model.onnx")
+   tokens: "/path/to/asr-models/sherpa-onnx-sense-voice-zh-en-ja-ko-yue-2024-07-17/tokens.txt" # Path to tokens.txt (required for all model types)
+   # --- Optional parameters (with defaults shown) ---
+   # hotwords_file: "" # Path to hotwords file (if using hotwords)
+   # hotwords_score: 1.5 # Score for hotwords
+   # modeling_unit: "" # Modeling unit for hotwords (if applicable)
+   # bpe_vocab: "" # Path to BPE vocabulary (if applicable)
+   num_threads: 4 # Number of threads
+   # whisper_language: "" # Language for Whisper models (e.g., "en", "zh", etc. - if using Whisper)
+   # whisper_task: "transcribe" # Task for Whisper models ("transcribe" or "translate" - if using Whisper)
+   # whisper_tail_paddings: -1 # Tail padding for Whisper models (if using Whisper)
+   # blank_penalty: 0.0 # Penalty for blank symbol
+   # decoding_method: "greedy_search" # "greedy_search" or "modified_beam_search"
+   # debug: False # Enable debug mode
+   # sample_rate: 16000 # Sample rate (should match the model's expected sample rate)
+   # feature_dim: 80 # Feature dimension (should match the model's expected feature dimension)
+   use_itn: True # Enable ITN for SenseVoice models (should be set to False if not using SenseVoice models)
+
+ # ============== Text to Speech ==============
+ TTS_MODEL: "SherpaOnnxTTS"
+ # text to speech model options:
+ # "AzureTTS", "pyttsx3TTS", "edgeTTS", "barkTTS",
+ # "cosyvoiceTTS", "meloTTS", "piperTTS", "coquiTTS",
+ # "fishAPITTS", "SherpaOnnxTTS"
+
+ # pip install sherpa-onnx
+ # documentation: https://k2-fsa.github.io/sherpa/onnx/index.html
+ # TTS models download: https://github.com/k2-fsa/sherpa-onnx/releases/tag/tts-models
+ SherpaOnnxTTS:
+   vits_model: "/path/to/tts-models/sherpa-onnx-vits-zh-ll/model.onnx" # Path to VITS model file
+   vits_lexicon: "/path/to/tts-models/sherpa-onnx-vits-zh-ll/lexicon.txt" # Path to lexicon file (optional)
+   vits_tokens: "/path/to/tts-models/sherpa-onnx-vits-zh-ll/tokens.txt" # Path to tokens file
+   vits_data_dir: "" # "/path/to/tts-models/vits-piper-en_GB-cori-high/espeak-ng-data" # Path to espeak-ng data (optional)
+   vits_dict_dir: "/path/to/tts-models/sherpa-onnx-vits-zh-ll/dict" # Path to Jieba dict (optional, for Chinese)
+   tts_rule_fsts: "/path/to/tts-models/sherpa-onnx-vits-zh-ll/number.fst,/path/to/tts-models/sherpa-onnx-vits-zh-ll/phone.fst,/path/to/tts-models/sherpa-onnx-vits-zh-ll/date.fst" # Path to rule FSTs file (optional)
+   max_num_sentences: 2 # Max sentences per batch (or -1 for all)
+   sid: 0 # Speaker ID (for multi-speaker models, 0-4 for this model)
+   provider: "cpu" # Use "cpu", "cuda" (GPU), or "coreml" (Apple)
+   num_threads: 1 # Number of computation threads
+   speed: 1.0 # Speech speed (1.0 is normal)
+   debug: false # Enable debug mode (True/False)
doc/sample_conf/sherpaASR_paraformer.yaml ADDED
@@ -0,0 +1,65 @@
+ SYSTEM_CONFIG:
+   CONF_NAME: "sherpaASR_paraformer"
+   CONF_UID: "sherpaASR_paraformer"
+
+ # ============== Voice Interaction Settings ==============
+
+ # === Automatic Speech Recognition ===
+ VOICE_INPUT_ON: True
+ # Put your mic in the browser or in the terminal? (would increase latency)
+ MIC_IN_BROWSER: False # Deprecated and useless now. Do not enable it. Bad things will happen.
+
+ # speech to text model options: "Faster-Whisper", "WhisperCPP", "Whisper", "AzureASR", "FunASR", "GroqWhisperASR", "SherpaOnnxASR"
+ ASR_MODEL: "SherpaOnnxASR"
+
+ # pip install sherpa-onnx
+ # documentation: https://k2-fsa.github.io/sherpa/onnx/index.html
+ # ASR models download: https://github.com/k2-fsa/sherpa-onnx/releases/tag/asr-models
+ SherpaOnnxASR:
+   model_type: "paraformer" # "transducer", "paraformer", "nemo_ctc", "wenet_ctc", "whisper", "tdnn_ctc"
+   # Choose only ONE of the following, depending on the model_type:
+   # --- For model_type: "transducer" ---
+   # encoder: "" # Path to the encoder model (e.g., "path/to/encoder.onnx")
+   # decoder: "" # Path to the decoder model (e.g., "path/to/decoder.onnx")
+   # joiner: "" # Path to the joiner model (e.g., "path/to/joiner.onnx")
+   # --- For model_type: "paraformer" ---
+   paraformer: "/path/to/asr-models/sherpa-onnx-paraformer-zh-2024-03-09/model.onnx" # Path to the paraformer model (e.g., "path/to/model.onnx")
+   # --- For model_type: "nemo_ctc" ---
+   # nemo_ctc: "" # Path to the NeMo CTC model (e.g., "path/to/model.onnx")
+   # --- For model_type: "wenet_ctc" ---
+   # wenet_ctc: "" # Path to the WeNet CTC model (e.g., "path/to/model.onnx")
+   # --- For model_type: "tdnn_ctc" ---
+   # tdnn_model: "" # Path to the TDNN CTC model (e.g., "path/to/model.onnx")
+   # --- For model_type: "whisper" ---
+   # whisper_encoder: "" # Path to the Whisper encoder model (e.g., "path/to/encoder.onnx")
+   # whisper_decoder: "" # Path to the Whisper decoder model (e.g., "path/to/decoder.onnx")
+   # --- For model_type: "sense_voice" ---
+   # sense_voice: "/path/to/sherpa-onnx-sense-voice-zh-en-ja-ko-yue-2024-07-17/model.onnx" # Path to the SenseVoice model (e.g., "path/to/model.onnx")
+   tokens: "/path/to/asr-models/sherpa-onnx-paraformer-zh-2024-03-09/tokens.txt" # Path to tokens.txt (required for all model types)
+   # --- Optional parameters (with defaults shown) ---
+   # hotwords_file: "" # Path to hotwords file (if using hotwords)
+   # hotwords_score: 1.5 # Score for hotwords
+   # modeling_unit: "" # Modeling unit for hotwords (if applicable)
+   # bpe_vocab: "" # Path to BPE vocabulary (if applicable)
+   num_threads: 2 # Number of threads
+   # whisper_language: "" # Language for Whisper models (e.g., "en", "zh", etc. - if using Whisper)
+   # whisper_task: "transcribe" # Task for Whisper models ("transcribe" or "translate" - if using Whisper)
+   # whisper_tail_paddings: -1 # Tail padding for Whisper models (if using Whisper)
+   # blank_penalty: 0.0 # Penalty for blank symbol
+   # decoding_method: "greedy_search" # "greedy_search" or "modified_beam_search"
+   # debug: False # Enable debug mode
+   # sample_rate: 16000 # Sample rate (should match the model's expected sample rate)
+   # feature_dim: 80 # Feature dimension (should match the model's expected feature dimension)
+   # use_itn: True # Enable ITN for SenseVoice models (should be set to False if not using SenseVoice models)
+
+ # ============== Text to Speech ==============
+ TTS_MODEL: "edgeTTS"
+ # text to speech model options:
+ # "AzureTTS", "pyttsx3TTS", "edgeTTS", "barkTTS",
+ # "cosyvoiceTTS", "meloTTS", "piperTTS", "coquiTTS",
+ # "fishAPITTS", "SherpaOnnxTTS"
+
+ edgeTTS:
+   # Check out doc at https://github.com/rany2/edge-tts
+   # Use `edge-tts --list-voices` to list all available voices
+   voice: "en-US-AvaMultilingualNeural" # "zh-CN-XiaoxiaoNeural" # "ja-JP-NanamiNeural"
doc/sample_conf/sherpaASR_sense_voice.yaml ADDED
@@ -0,0 +1,67 @@
+ SYSTEM_CONFIG:
+   CONF_NAME: "sherpaASR_sense_voice"
+   CONF_UID: "sherpaASR_sense_voice"
+
+ # ============== Voice Interaction Settings ==============
+
+ # === Automatic Speech Recognition ===
+ VOICE_INPUT_ON: True
+ # Put your mic in the browser or in the terminal? (would increase latency)
+ MIC_IN_BROWSER: False # Deprecated and useless now. Do not enable it. Bad things will happen.
+
+ # speech to text model options: "Faster-Whisper", "WhisperCPP", "Whisper", "AzureASR", "FunASR", "GroqWhisperASR", "SherpaOnnxASR"
+ ASR_MODEL: "SherpaOnnxASR"
+
+ # pip install sherpa-onnx
+ # documentation: https://k2-fsa.github.io/sherpa/onnx/index.html
+ # ASR models download: https://github.com/k2-fsa/sherpa-onnx/releases/tag/asr-models
+ SherpaOnnxASR:
+   model_type: "sense_voice" # "transducer", "paraformer", "nemo_ctc", "wenet_ctc", "whisper", "tdnn_ctc"
+   # Choose only ONE of the following, depending on the model_type:
+   # --- For model_type: "transducer" ---
+   # encoder: "" # Path to the encoder model (e.g., "path/to/encoder.onnx")
+   # decoder: "" # Path to the decoder model (e.g., "path/to/decoder.onnx")
+   # joiner: "" # Path to the joiner model (e.g., "path/to/joiner.onnx")
+   # --- For model_type: "paraformer" ---
+   # paraformer: "" # Path to the paraformer model (e.g., "path/to/model.onnx")
+   # --- For model_type: "nemo_ctc" ---
+   # nemo_ctc: "" # Path to the NeMo CTC model (e.g., "path/to/model.onnx")
+   # --- For model_type: "wenet_ctc" ---
+   # wenet_ctc: "" # Path to the WeNet CTC model (e.g., "path/to/model.onnx")
+   # --- For model_type: "tdnn_ctc" ---
+   # tdnn_model: "" # Path to the TDNN CTC model (e.g., "path/to/model.onnx")
+   # --- For model_type: "whisper" ---
+   # whisper_encoder: "" # Path to the Whisper encoder model (e.g., "path/to/encoder.onnx")
+   # whisper_decoder: "" # Path to the Whisper decoder model (e.g., "path/to/decoder.onnx")
+   # --- For model_type: "sense_voice" ---
+   sense_voice: "/path/to/asr-models/sherpa-onnx-sense-voice-zh-en-ja-ko-yue-2024-07-17/model.onnx" # Path to the SenseVoice model (e.g., "path/to/model.onnx")
+   tokens: "/path/to/asr-models/sherpa-onnx-sense-voice-zh-en-ja-ko-yue-2024-07-17/tokens.txt" # Path to tokens.txt (required for all model types)
+   # --- Optional parameters (with defaults shown) ---
+   # hotwords_file: "" # Path to hotwords file (if using hotwords)
+   # hotwords_score: 1.5 # Score for hotwords
+   # modeling_unit: "" # Modeling unit for hotwords (if applicable)
+   # bpe_vocab: "" # Path to BPE vocabulary (if applicable)
+   num_threads: 2 # Number of threads
+   # whisper_language: "" # Language for Whisper models (e.g., "en", "zh", etc. - if using Whisper)
+   # whisper_task: "transcribe" # Task for Whisper models ("transcribe" or "translate" - if using Whisper)
+   # whisper_tail_paddings: -1 # Tail padding for Whisper models (if using Whisper)
+   # blank_penalty: 0.0 # Penalty for blank symbol
+   # decoding_method: "greedy_search" # "greedy_search" or "modified_beam_search"
+   # debug: False # Enable debug mode
+   # sample_rate: 16000 # Sample rate (should match the model's expected sample rate)
+   # feature_dim: 80 # Feature dimension (should match the model's expected feature dimension)
+   use_itn: True # Enable ITN for SenseVoice models (should be set to False if not using SenseVoice models)
+
+ # ============== Text to Speech ==============
+ TTS_MODEL: "edgeTTS"
+ # text to speech model options:
+ # "AzureTTS", "pyttsx3TTS", "edgeTTS", "barkTTS",
+ # "cosyvoiceTTS", "meloTTS", "piperTTS", "coquiTTS",
+ # "fishAPITTS"
+
+ edgeTTS:
+   # Check out doc at https://github.com/rany2/edge-tts
+   # Use `edge-tts --list-voices` to list all available voices
+   voice: "en-US-AvaMultilingualNeural" # "zh-CN-XiaoxiaoNeural" # "ja-JP-NanamiNeural"
model_dict.json ADDED
@@ -0,0 +1,30 @@
+ [
+   {
+     "name": "mao_pro",
+     "description": "",
+     "url": "/live2d-models/mao_pro/runtime/mao_pro.model3.json",
+     "kScale": 0.5,
+     "initialXshift": 0,
+     "initialYshift": 0,
+     "kXOffset": 1150,
+     "idleMotionGroupName": "Idle",
+     "emotionMap": {
+       "neutral": 0,
+       "anger": 2,
+       "disgust": 2,
+       "fear": 1,
+       "joy": 3,
+       "smirk": 3,
+       "sadness": 1,
+       "surprise": 3
+     },
+     "tapMotions": {
+       "HitAreaHead": {
+         "": 1
+       },
+       "HitAreaBody": {
+         "": 1
+       }
+     }
+   }
+ ]
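Each entry in model_dict.json maps emotion tags to expression indices for a Live2D model via its `emotionMap`. A minimal sketch of how such an entry might be consumed; the `expression_index` helper and its fallback-to-neutral behavior are assumptions for illustration, not code from the repo.

```python
import json

# A trimmed copy of the model_dict.json entry above, inlined for the example.
MODEL_DICT = json.loads("""
[
  {
    "name": "mao_pro",
    "url": "/live2d-models/mao_pro/runtime/mao_pro.model3.json",
    "emotionMap": {"neutral": 0, "anger": 2, "joy": 3, "sadness": 1}
  }
]
""")

def expression_index(model_name: str, emotion: str) -> int:
    """Look up the expression index for `emotion` on the named model.

    Hypothetical helper: unknown emotions fall back to the "neutral" entry.
    """
    entry = next(m for m in MODEL_DICT if m["name"] == model_name)
    emotion_map = entry["emotionMap"]
    return emotion_map.get(emotion, emotion_map["neutral"])

print(expression_index("mao_pro", "joy"))      # 3
print(expression_index("mao_pro", "unknown"))  # 0
```

Note that several emotions may share one index (e.g. "anger" and "disgust" both map to 2 in the full entry), so the map is many-to-one by design.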
pixi.lock ADDED
@@ -0,0 +1,1652 @@
+ version: 6
+ environments:
+   default:
+     channels:
+     - url: https://conda.anaconda.org/conda-forge/
+     indexes:
+     - https://pypi.org/simple
+     packages:
+       linux-64:
+       - conda: https://conda.anaconda.org/conda-forge/linux-64/_libgcc_mutex-0.1-conda_forge.tar.bz2
+       - conda: https://conda.anaconda.org/conda-forge/linux-64/_openmp_mutex-4.5-2_gnu.tar.bz2
+       - conda: https://conda.anaconda.org/conda-forge/linux-64/bzip2-1.0.8-h4bc722e_7.conda
+       - conda: https://conda.anaconda.org/conda-forge/linux-64/ca-certificates-2025.1.31-hbcca054_0.conda
+       - conda: https://conda.anaconda.org/conda-forge/noarch/cuda-version-11.8-h70ddcb2_3.conda
+       - conda: https://conda.anaconda.org/conda-forge/linux-64/cudatoolkit-11.8.0-h4ba93d1_13.conda
+       - conda: https://conda.anaconda.org/conda-forge/linux-64/cudnn-8.9.7.29-hbc23b4c_3.conda
+       - conda: https://conda.anaconda.org/conda-forge/linux-64/ld_impl_linux-64-2.43-h712a8e2_2.conda
+       - conda: https://conda.anaconda.org/conda-forge/linux-64/libexpat-2.6.4-h5888daf_0.conda
+       - conda: https://conda.anaconda.org/conda-forge/linux-64/libffi-3.4.2-h7f98852_5.tar.bz2
+       - conda: https://conda.anaconda.org/conda-forge/linux-64/libgcc-14.2.0-h77fa898_1.conda
+       - conda: https://conda.anaconda.org/conda-forge/linux-64/libgcc-ng-14.2.0-h69a702a_1.conda
+       - conda: https://conda.anaconda.org/conda-forge/linux-64/libgomp-14.2.0-h77fa898_1.conda
+       - conda: https://conda.anaconda.org/conda-forge/linux-64/liblzma-5.6.4-hb9d3cd8_0.conda
+       - conda: https://conda.anaconda.org/conda-forge/linux-64/libnsl-2.0.1-hd590300_0.conda
+       - conda: https://conda.anaconda.org/conda-forge/linux-64/libsqlite-3.49.0-hee588c1_0.conda
+       - conda: https://conda.anaconda.org/conda-forge/linux-64/libstdcxx-14.2.0-hc0a3c3a_1.conda
+       - conda: https://conda.anaconda.org/conda-forge/linux-64/libstdcxx-ng-14.2.0-h4852527_1.conda
+       - conda: https://conda.anaconda.org/conda-forge/linux-64/libuuid-2.38.1-h0b41bf4_0.conda
+       - conda: https://conda.anaconda.org/conda-forge/linux-64/libxcrypt-4.4.36-hd590300_1.conda
+       - conda: https://conda.anaconda.org/conda-forge/linux-64/libzlib-1.3.1-hb9d3cd8_2.conda
+       - conda: https://conda.anaconda.org/conda-forge/linux-64/ncurses-6.5-h2d0b736_3.conda
+       - conda: https://conda.anaconda.org/conda-forge/linux-64/openssl-3.4.0-h7b32b05_1.conda
+       - conda: https://conda.anaconda.org/conda-forge/linux-64/python-3.12.8-h9e4cc4f_1_cpython.conda
+       - conda: https://conda.anaconda.org/conda-forge/linux-64/readline-8.2-h8228510_1.conda
+       - conda: https://conda.anaconda.org/conda-forge/linux-64/tk-8.6.13-noxft_h4845f30_101.conda
+       - conda: https://conda.anaconda.org/conda-forge/noarch/tzdata-2025a-h78e105d_0.conda
+       - pypi: https://files.pythonhosted.org/packages/44/4c/03fb05f56551828ec67ceb3665e5dc51638042d204983a03b0a1541475b6/aiohappyeyeballs-2.4.6-py3-none-any.whl
+       - pypi: https://files.pythonhosted.org/packages/17/e2/9f744cee0861af673dc271a3351f59ebd5415928e20080ab85be25641471/aiohttp-3.11.12-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
+       - pypi: https://files.pythonhosted.org/packages/ec/6a/bc7e17a3e87a2985d3e8f4da4cd0f481060eb78fb08596c42be62c90a4d9/aiosignal-1.3.2-py2.py3-none-any.whl
+       - pypi: https://files.pythonhosted.org/packages/78/b6/6307fbef88d9b5ee7421e68d78a9f162e0da4900bc5f5793f6d3d0e34fb8/annotated_types-0.7.0-py3-none-any.whl
+       - pypi: https://files.pythonhosted.org/packages/74/86/e81814e542d1eaeec84d2312bec93a99b9ef1d78d9bfae1fc5dd74abdf15/anthropic-0.45.2-py3-none-any.whl
+       - pypi: https://files.pythonhosted.org/packages/46/eb/e7f063ad1fec6b3178a3cd82d1a3c4de82cccf283fc42746168188e1cdd5/anyio-4.8.0-py3-none-any.whl
+       - pypi: https://files.pythonhosted.org/packages/fc/30/d4986a882011f9df997a55e6becd864812ccfcd821d64aac8570ee39f719/attrs-25.1.0-py3-none-any.whl
+       - pypi: https://files.pythonhosted.org/packages/83/f7/9241ad7154e554730ea56271e14ad1115c278b26a81eb892eac16fabb480/azure_cognitiveservices_speech-1.42.0-py3-none-manylinux1_x86_64.whl
+       - pypi: https://files.pythonhosted.org/packages/38/fc/bce832fd4fd99766c04d1ee0eead6b0ec6486fb100ae5e74c1d91292b982/certifi-2025.1.31-py3-none-any.whl
+       - pypi: https://files.pythonhosted.org/packages/b2/d5/da47df7004cb17e4955df6a43d14b3b4ae77737dff8bf7f8f333196717bf/cffi-1.17.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
+       - pypi: https://files.pythonhosted.org/packages/38/6f/f5fbc992a329ee4e0f288c1fe0e2ad9485ed064cac731ed2fe47dcc38cbf/chardet-5.2.0-py3-none-any.whl
+       - pypi: https://files.pythonhosted.org/packages/3e/a2/513f6cbe752421f16d969e32f3583762bfd583848b763913ddab8d9bfd4f/charset_normalizer-3.4.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
+       - pypi: https://files.pythonhosted.org/packages/7e/d4/7ebdbd03970677812aac39c869717059dbb71a4cfc033ca6e5221787892c/click-8.1.8-py3-none-any.whl
+       - pypi: https://files.pythonhosted.org/packages/a7/06/3d6badcf13db419e25b07041d9c7b4a2c331d3f4e7134445ec5df57714cd/coloredlogs-15.0.1-py2.py3-none-any.whl
+       - pypi: https://files.pythonhosted.org/packages/12/b3/231ffd4ab1fc9d679809f356cebee130ac7daa00d6d6f3206dd4fd137e9e/distro-1.9.0-py3-none-any.whl
+       - pypi: https://files.pythonhosted.org/packages/f8/37/00c211f1021f9b04dde72dcbee72ce66248519c3899a47b06f8940a67c08/edge_tts-7.0.0-py3-none-any.whl
+       - pypi: https://files.pythonhosted.org/packages/8f/7d/2d6ce181d7a5f51dedb8c06206cbf0ec026a99bf145edd309f9e17c3282f/fastapi-0.115.8-py3-none-any.whl
+       - pypi: https://files.pythonhosted.org/packages/0e/e2/b066e6e02d67bf5261a6d7539648c6da3365cc9eff3eb6d82009595d84d9/flatbuffers-25.1.24-py2.py3-none-any.whl
+       - pypi: https://files.pythonhosted.org/packages/af/f2/64b73a9bb86f5a89fb55450e97cd5c1f84a862d4ff90d9fd1a73ab0f64a5/frozenlist-1.5.0-cp312-cp312-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl
+       - pypi: https://files.pythonhosted.org/packages/b0/6c/5a53d632b44ef7655ac8d9b34432e13160917f9307c94b1467efd34e336e/groq-0.18.0-py3-none-any.whl
+       - pypi: https://files.pythonhosted.org/packages/95/04/ff642e65ad6b90db43e668d70ffb6736436c7ce41fcc549f4e9472234127/h11-0.14.0-py3-none-any.whl
+       - pypi: https://files.pythonhosted.org/packages/87/f5/72347bc88306acb359581ac4d52f23c0ef445b57157adedb9aee0cd689d2/httpcore-1.0.7-py3-none-any.whl
+       - pypi: https://files.pythonhosted.org/packages/f7/d8/b644c44acc1368938317d76ac991c9bba1166311880bcc0ac297cb9d6bd7/httptools-0.6.4-cp312-cp312-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl
+       - pypi: https://files.pythonhosted.org/packages/2a/39/e50c7c3a983047577ee07d2a9e53faf5a69493943ec3f6a384bdc792deb2/httpx-0.28.1-py3-none-any.whl
+       - pypi: https://files.pythonhosted.org/packages/f0/0f/310fb31e39e2d734ccaa2c0fb981ee41f7bd5056ce9bc29b2248bd569169/humanfriendly-10.0-py2.py3-none-any.whl
+       - pypi: https://files.pythonhosted.org/packages/76/c6/c88e154df9c4e1a2a66ccf0005a88dfb2650c1dffb6f5ce603dfbd452ce3/idna-3.10-py3-none-any.whl
+       - pypi: https://files.pythonhosted.org/packages/17/61/beea645c0bf398ced8b199e377b61eb999d8e46e053bb285c91c3d3eaab0/jiter-0.8.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
+       - pypi: https://files.pythonhosted.org/packages/0e/72/a3add0e4eec4eb9e2569554f7c70f4a3c27712f40e3284d483e88094cc0e/langdetect-1.0.9.tar.gz
+       - pypi: https://files.pythonhosted.org/packages/0c/29/0348de65b8cc732daa3e33e67806420b2ae89bdce2b04af740289c5c6c8c/loguru-0.7.3-py3-none-any.whl
+       - pypi: https://files.pythonhosted.org/packages/43/e3/7d92a15f894aa0c9c4b49b8ee9ac9850d6e63b03c9c32c0367a13ae62209/mpmath-1.3.0-py3-none-any.whl
+       - pypi: https://files.pythonhosted.org/packages/d3/c8/529101d7176fe7dfe1d99604e48d69c5dfdcadb4f06561f465c8ef12b4df/multidict-6.1.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
+       - pypi: https://files.pythonhosted.org/packages/0f/50/de23fde84e45f5c4fda2488c759b69990fd4512387a8632860f3ac9cd225/numpy-1.26.4-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
+       - pypi: https://files.pythonhosted.org/packages/47/42/2f71f5680834688a9c81becbe5c5bb996fd33eaed5c66ae0606c3b1d6a02/onnxruntime-1.20.1-cp312-cp312-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl
+       - pypi: https://files.pythonhosted.org/packages/9a/b6/2e2a011b2dc27a6711376808b4cd8c922c476ea0f1420b39892117fa8563/openai-1.61.1-py3-none-any.whl
+       - pypi: https://files.pythonhosted.org/packages/88/ef/eb23f262cca3c0c4eb7ab1933c3b1f03d021f2c48f54763065b6f0e321be/packaging-24.2-py3-none-any.whl
+       - pypi: https://files.pythonhosted.org/packages/1c/07/ebe102777a830bca91bbb93e3479cd34c2ca5d0361b83be9dbd93104865e/propcache-0.2.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
+       - pypi: https://files.pythonhosted.org/packages/a8/45/2ebbde52ad2be18d3675b6bee50e68cd73c9e0654de77d595540b5129df8/protobuf-5.29.3-cp38-abi3-manylinux2014_x86_64.whl
+       - pypi: https://files.pythonhosted.org/packages/13/a3/a812df4e2dd5696d1f351d58b8fe16a405b234ad2886a0dab9183fb78109/pycparser-2.22-py3-none-any.whl
+       - pypi: https://files.pythonhosted.org/packages/f4/3c/8cc1cc84deffa6e25d2d0c688ebb80635dfdbf1dbea3e30c541c8cf4d860/pydantic-2.10.6-py3-none-any.whl
+       - pypi: https://files.pythonhosted.org/packages/8d/f0/49129b27c43396581a635d8710dae54a791b17dfc50c70164866bbf865e3/pydantic_core-2.27.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
+       - pypi: https://files.pythonhosted.org/packages/a6/53/d78dc063216e62fc55f6b2eebb447f6a4b0a59f55c8406376f76bf959b08/pydub-0.25.1-py2.py3-none-any.whl
+       - pypi: https://files.pythonhosted.org/packages/48/0a/c99fb7d7e176f8b176ef19704a32e6a9c6aafdf19ef75a187f701fc15801/pysbd-0.3.4-py3-none-any.whl
+       - pypi: https://files.pythonhosted.org/packages/6a/3e/b68c118422ec867fa7ab88444e1274aa40681c606d59ac27de5a5588f082/python_dotenv-1.0.1-py3-none-any.whl
80
+ - pypi: https://files.pythonhosted.org/packages/94/df/e1584757c736c4fba09a3fb4f22fe625cc3367b06c6ece221e4b8c1e3023/pyttsx3-2.98-py3-none-any.whl
81
+ - pypi: https://files.pythonhosted.org/packages/b9/2b/614b4752f2e127db5cc206abc23a8c19678e92b23c3db30fc86ab731d3bd/PyYAML-6.0.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
82
+ - pypi: https://files.pythonhosted.org/packages/f9/9b/335f9764261e915ed497fcdeb11df5dfd6f7bf257d4a6a2a686d80da4d54/requests-2.32.3-py3-none-any.whl
83
+ - pypi: https://files.pythonhosted.org/packages/04/70/e59c192a3ad476355e7f45fb3a87326f5219cc7c472e6b040c6c6595c8f0/ruff-0.9.5-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
84
+ - pypi: https://files.pythonhosted.org/packages/b0/3c/0de11ca154e24a57b579fb648151d901326d3102115bc4f9a7a86526ce54/scipy-1.15.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
85
+ - pypi: https://files.pythonhosted.org/packages/48/77/a3771191d4bac619df7dc06db14a7b22dd0007548b71ee54a81f80e2d219/sherpa_onnx-1.10.43-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
86
+ - pypi: https://files.pythonhosted.org/packages/b7/ce/149a00dd41f10bc29e5921b496af8b574d8413afcd5e30dfa0ed46c2cc5e/six-1.17.0-py2.py3-none-any.whl
87
+ - pypi: https://files.pythonhosted.org/packages/e9/44/75a9c9421471a6c4805dbf2356f7c181a29c1879239abab1ea2cc8f38b40/sniffio-1.3.1-py3-none-any.whl
88
+ - pypi: https://files.pythonhosted.org/packages/57/5e/70bdd9579b35003a489fc850b5047beeda26328053ebadc1fb60f320f7db/soundfile-0.13.1-py2.py3-none-manylinux_2_28_x86_64.whl
89
+ - pypi: https://files.pythonhosted.org/packages/66/b7/4a1bc231e0681ebf339337b0cd05b91dc6a0d701fa852bb812e244b7a030/srt-3.5.3.tar.gz
90
+ - pypi: https://files.pythonhosted.org/packages/d9/61/f2b52e107b1fc8944b33ef56bf6ac4ebbe16d91b94d2b87ce013bf63fb84/starlette-0.45.3-py3-none-any.whl
91
+ - pypi: https://files.pythonhosted.org/packages/99/ff/c87e0622b1dadea79d2fb0b25ade9ed98954c9033722eb707053d310d4f3/sympy-1.13.3-py3-none-any.whl
92
+ - pypi: https://files.pythonhosted.org/packages/40/44/4a5f08c96eb108af5cb50b41f76142f0afa346dfa99d5296fe7202a11854/tabulate-0.9.0-py3-none-any.whl
93
+ - pypi: https://files.pythonhosted.org/packages/5c/51/51c3f2884d7bab89af25f678447ea7d297b53b5a3b5730a7cb2ef6069f07/tomli-2.2.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
94
+ - pypi: https://files.pythonhosted.org/packages/d0/30/dc54f88dd4a2b5dc8a0279bdd7270e735851848b762aeb1c1184ed1f6b14/tqdm-4.67.1-py3-none-any.whl
95
+ - pypi: https://files.pythonhosted.org/packages/26/9f/ad63fc0248c5379346306f8668cda6e2e2e9c95e01216d2b8ffd9ff037d0/typing_extensions-4.12.2-py3-none-any.whl
96
+ - pypi: https://files.pythonhosted.org/packages/c8/19/4ec628951a74043532ca2cf5d97b7b14863931476d117c471e8e2b1eb39f/urllib3-2.3.0-py3-none-any.whl
97
+ - pypi: https://files.pythonhosted.org/packages/61/14/33a3a1352cfa71812a3a21e8c9bfb83f60b0011f5e36f2b1399d51928209/uvicorn-0.34.0-py3-none-any.whl
98
+ - pypi: https://files.pythonhosted.org/packages/06/a7/b4e6a19925c900be9f98bec0a75e6e8f79bb53bdeb891916609ab3958967/uvloop-0.21.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
99
+ - pypi: https://files.pythonhosted.org/packages/2b/b4/9396cc61b948ef18943e7c85ecfa64cf940c88977d882da57147f62b34b1/watchfiles-1.0.4-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
100
+ - pypi: https://files.pythonhosted.org/packages/5a/84/44687a29792a70e111c5c477230a72c4b957d88d16141199bf9acb7537a3/websocket_client-1.8.0-py3-none-any.whl
101
+ - pypi: https://files.pythonhosted.org/packages/81/da/72f7caabd94652e6eb7e92ed2d3da818626e70b4f2b15a854ef60bf501ec/websockets-14.2-cp312-cp312-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl
102
+ - pypi: https://files.pythonhosted.org/packages/1a/e1/a097d5755d3ea8479a42856f51d97eeff7a3a7160593332d98f2709b3580/yarl-1.18.3-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
103
+ - pypi: .
104
+ win-64:
105
+ - conda: https://conda.anaconda.org/conda-forge/win-64/bzip2-1.0.8-h2466b09_7.conda
106
+ - conda: https://conda.anaconda.org/conda-forge/win-64/ca-certificates-2025.1.31-h56e8100_0.conda
107
+ - conda: https://conda.anaconda.org/conda-forge/noarch/cuda-version-11.8-h70ddcb2_3.conda
108
+ - conda: https://conda.anaconda.org/conda-forge/win-64/cudatoolkit-11.8.0-h09e9e62_13.conda
109
+ - conda: https://conda.anaconda.org/conda-forge/win-64/cudnn-8.9.7.29-he6de189_3.conda
110
+ - conda: https://conda.anaconda.org/conda-forge/win-64/libexpat-2.6.4-he0c23c2_0.conda
111
+ - conda: https://conda.anaconda.org/conda-forge/win-64/libffi-3.4.2-h8ffe710_5.tar.bz2
112
+ - conda: https://conda.anaconda.org/conda-forge/win-64/liblzma-5.6.4-h2466b09_0.conda
113
+ - conda: https://conda.anaconda.org/conda-forge/win-64/libsqlite-3.49.0-h67fdade_0.conda
114
+ - conda: https://conda.anaconda.org/conda-forge/win-64/libzlib-1.3.1-h2466b09_2.conda
115
+ - conda: https://conda.anaconda.org/conda-forge/win-64/libzlib-wapi-1.2.13-h2466b09_6.conda
116
+ - conda: https://conda.anaconda.org/conda-forge/win-64/openssl-3.4.0-ha4e3fda_1.conda
117
+ - conda: https://conda.anaconda.org/conda-forge/win-64/python-3.12.8-h3f84c4b_1_cpython.conda
118
+ - conda: https://conda.anaconda.org/conda-forge/win-64/tk-8.6.13-h5226925_1.conda
119
+ - conda: https://conda.anaconda.org/conda-forge/noarch/tzdata-2025a-h78e105d_0.conda
120
+ - conda: https://conda.anaconda.org/conda-forge/win-64/ucrt-10.0.22621.0-h57928b3_1.conda
121
+ - conda: https://conda.anaconda.org/conda-forge/win-64/vc-14.3-h5fd82a7_24.conda
122
+ - conda: https://conda.anaconda.org/conda-forge/win-64/vc14_runtime-14.42.34433-h6356254_24.conda
123
+ - conda: https://conda.anaconda.org/conda-forge/win-64/vs2015_runtime-14.42.34433-hfef2bbc_24.conda
124
+ - pypi: https://files.pythonhosted.org/packages/44/4c/03fb05f56551828ec67ceb3665e5dc51638042d204983a03b0a1541475b6/aiohappyeyeballs-2.4.6-py3-none-any.whl
125
+ - pypi: https://files.pythonhosted.org/packages/3d/63/5eca549d34d141bcd9de50d4e59b913f3641559460c739d5e215693cb54a/aiohttp-3.11.12-cp312-cp312-win_amd64.whl
126
+ - pypi: https://files.pythonhosted.org/packages/ec/6a/bc7e17a3e87a2985d3e8f4da4cd0f481060eb78fb08596c42be62c90a4d9/aiosignal-1.3.2-py2.py3-none-any.whl
127
+ - pypi: https://files.pythonhosted.org/packages/78/b6/6307fbef88d9b5ee7421e68d78a9f162e0da4900bc5f5793f6d3d0e34fb8/annotated_types-0.7.0-py3-none-any.whl
128
+ - pypi: https://files.pythonhosted.org/packages/74/86/e81814e542d1eaeec84d2312bec93a99b9ef1d78d9bfae1fc5dd74abdf15/anthropic-0.45.2-py3-none-any.whl
129
+ - pypi: https://files.pythonhosted.org/packages/46/eb/e7f063ad1fec6b3178a3cd82d1a3c4de82cccf283fc42746168188e1cdd5/anyio-4.8.0-py3-none-any.whl
130
+ - pypi: https://files.pythonhosted.org/packages/fc/30/d4986a882011f9df997a55e6becd864812ccfcd821d64aac8570ee39f719/attrs-25.1.0-py3-none-any.whl
131
+ - pypi: https://files.pythonhosted.org/packages/52/bb/ef7a29f5717cca646be6698d80e542446a6a442be897c8f67bf93551c672/azure_cognitiveservices_speech-1.42.0-py3-none-win_amd64.whl
132
+ - pypi: https://files.pythonhosted.org/packages/38/fc/bce832fd4fd99766c04d1ee0eead6b0ec6486fb100ae5e74c1d91292b982/certifi-2025.1.31-py3-none-any.whl
133
+ - pypi: https://files.pythonhosted.org/packages/50/b9/db34c4755a7bd1cb2d1603ac3863f22bcecbd1ba29e5ee841a4bc510b294/cffi-1.17.1-cp312-cp312-win_amd64.whl
134
+ - pypi: https://files.pythonhosted.org/packages/38/6f/f5fbc992a329ee4e0f288c1fe0e2ad9485ed064cac731ed2fe47dcc38cbf/chardet-5.2.0-py3-none-any.whl
135
+ - pypi: https://files.pythonhosted.org/packages/21/5b/1b390b03b1d16c7e382b561c5329f83cc06623916aab983e8ab9239c7d5c/charset_normalizer-3.4.1-cp312-cp312-win_amd64.whl
136
+ - pypi: https://files.pythonhosted.org/packages/7e/d4/7ebdbd03970677812aac39c869717059dbb71a4cfc033ca6e5221787892c/click-8.1.8-py3-none-any.whl
137
+ - pypi: https://files.pythonhosted.org/packages/d1/d6/3965ed04c63042e047cb6a3e6ed1a63a35087b6a609aa3a15ed8ac56c221/colorama-0.4.6-py2.py3-none-any.whl
138
+ - pypi: https://files.pythonhosted.org/packages/a7/06/3d6badcf13db419e25b07041d9c7b4a2c331d3f4e7134445ec5df57714cd/coloredlogs-15.0.1-py2.py3-none-any.whl
139
+ - pypi: https://files.pythonhosted.org/packages/4c/44/72009bb0a0d8286f6408c9cb70552350e21e9c280bfa1ef30784b30dfc0f/comtypes-1.4.10-py3-none-any.whl
140
+ - pypi: https://files.pythonhosted.org/packages/12/b3/231ffd4ab1fc9d679809f356cebee130ac7daa00d6d6f3206dd4fd137e9e/distro-1.9.0-py3-none-any.whl
141
+ - pypi: https://files.pythonhosted.org/packages/f8/37/00c211f1021f9b04dde72dcbee72ce66248519c3899a47b06f8940a67c08/edge_tts-7.0.0-py3-none-any.whl
142
+ - pypi: https://files.pythonhosted.org/packages/8f/7d/2d6ce181d7a5f51dedb8c06206cbf0ec026a99bf145edd309f9e17c3282f/fastapi-0.115.8-py3-none-any.whl
143
+ - pypi: https://files.pythonhosted.org/packages/0e/e2/b066e6e02d67bf5261a6d7539648c6da3365cc9eff3eb6d82009595d84d9/flatbuffers-25.1.24-py2.py3-none-any.whl
144
+ - pypi: https://files.pythonhosted.org/packages/b1/56/4e45136ffc6bdbfa68c29ca56ef53783ef4c2fd395f7cbf99a2624aa9aaa/frozenlist-1.5.0-cp312-cp312-win_amd64.whl
145
+ - pypi: https://files.pythonhosted.org/packages/b0/6c/5a53d632b44ef7655ac8d9b34432e13160917f9307c94b1467efd34e336e/groq-0.18.0-py3-none-any.whl
146
+ - pypi: https://files.pythonhosted.org/packages/95/04/ff642e65ad6b90db43e668d70ffb6736436c7ce41fcc549f4e9472234127/h11-0.14.0-py3-none-any.whl
147
+ - pypi: https://files.pythonhosted.org/packages/87/f5/72347bc88306acb359581ac4d52f23c0ef445b57157adedb9aee0cd689d2/httpcore-1.0.7-py3-none-any.whl
148
+ - pypi: https://files.pythonhosted.org/packages/12/b7/5cae71a8868e555f3f67a50ee7f673ce36eac970f029c0c5e9d584352961/httptools-0.6.4-cp312-cp312-win_amd64.whl
149
+ - pypi: https://files.pythonhosted.org/packages/2a/39/e50c7c3a983047577ee07d2a9e53faf5a69493943ec3f6a384bdc792deb2/httpx-0.28.1-py3-none-any.whl
150
+ - pypi: https://files.pythonhosted.org/packages/f0/0f/310fb31e39e2d734ccaa2c0fb981ee41f7bd5056ce9bc29b2248bd569169/humanfriendly-10.0-py2.py3-none-any.whl
151
+ - pypi: https://files.pythonhosted.org/packages/76/c6/c88e154df9c4e1a2a66ccf0005a88dfb2650c1dffb6f5ce603dfbd452ce3/idna-3.10-py3-none-any.whl
152
+ - pypi: https://files.pythonhosted.org/packages/41/69/6d4bbe66b3b3b4507e47aa1dd5d075919ad242b4b1115b3f80eecd443687/jiter-0.8.2-cp312-cp312-win_amd64.whl
153
+ - pypi: https://files.pythonhosted.org/packages/0e/72/a3add0e4eec4eb9e2569554f7c70f4a3c27712f40e3284d483e88094cc0e/langdetect-1.0.9.tar.gz
154
+ - pypi: https://files.pythonhosted.org/packages/0c/29/0348de65b8cc732daa3e33e67806420b2ae89bdce2b04af740289c5c6c8c/loguru-0.7.3-py3-none-any.whl
155
+ - pypi: https://files.pythonhosted.org/packages/43/e3/7d92a15f894aa0c9c4b49b8ee9ac9850d6e63b03c9c32c0367a13ae62209/mpmath-1.3.0-py3-none-any.whl
156
+ - pypi: https://files.pythonhosted.org/packages/a3/bf/f332a13486b1ed0496d624bcc7e8357bb8053823e8cd4b9a18edc1d97e73/multidict-6.1.0-cp312-cp312-win_amd64.whl
157
+ - pypi: https://files.pythonhosted.org/packages/16/2e/86f24451c2d530c88daf997cb8d6ac622c1d40d19f5a031ed68a4b73a374/numpy-1.26.4-cp312-cp312-win_amd64.whl
158
+ - pypi: https://files.pythonhosted.org/packages/dd/80/76979e0b744307d488c79e41051117634b956612cc731f1028eb17ee7294/onnxruntime-1.20.1-cp312-cp312-win_amd64.whl
159
+ - pypi: https://files.pythonhosted.org/packages/9a/b6/2e2a011b2dc27a6711376808b4cd8c922c476ea0f1420b39892117fa8563/openai-1.61.1-py3-none-any.whl
160
+ - pypi: https://files.pythonhosted.org/packages/88/ef/eb23f262cca3c0c4eb7ab1933c3b1f03d021f2c48f54763065b6f0e321be/packaging-24.2-py3-none-any.whl
161
+ - pypi: https://files.pythonhosted.org/packages/3b/77/a92c3ef994e47180862b9d7d11e37624fb1c00a16d61faf55115d970628b/propcache-0.2.1-cp312-cp312-win_amd64.whl
162
+ - pypi: https://files.pythonhosted.org/packages/61/fa/aae8e10512b83de633f2646506a6d835b151edf4b30d18d73afd01447253/protobuf-5.29.3-cp310-abi3-win_amd64.whl
163
+ - pypi: https://files.pythonhosted.org/packages/13/a3/a812df4e2dd5696d1f351d58b8fe16a405b234ad2886a0dab9183fb78109/pycparser-2.22-py3-none-any.whl
164
+ - pypi: https://files.pythonhosted.org/packages/f4/3c/8cc1cc84deffa6e25d2d0c688ebb80635dfdbf1dbea3e30c541c8cf4d860/pydantic-2.10.6-py3-none-any.whl
165
+ - pypi: https://files.pythonhosted.org/packages/1f/ea/cd7209a889163b8dcca139fe32b9687dd05249161a3edda62860430457a5/pydantic_core-2.27.2-cp312-cp312-win_amd64.whl
166
+ - pypi: https://files.pythonhosted.org/packages/a6/53/d78dc063216e62fc55f6b2eebb447f6a4b0a59f55c8406376f76bf959b08/pydub-0.25.1-py2.py3-none-any.whl
167
+ - pypi: https://files.pythonhosted.org/packages/d0/1b/2f292bbd742e369a100c91faa0483172cd91a1a422a6692055ac920946c5/pypiwin32-223-py3-none-any.whl
168
+ - pypi: https://files.pythonhosted.org/packages/5a/dc/491b7661614ab97483abf2056be1deee4dc2490ecbf7bff9ab5cdbac86e1/pyreadline3-3.5.4-py3-none-any.whl
169
+ - pypi: https://files.pythonhosted.org/packages/48/0a/c99fb7d7e176f8b176ef19704a32e6a9c6aafdf19ef75a187f701fc15801/pysbd-0.3.4-py3-none-any.whl
170
+ - pypi: https://files.pythonhosted.org/packages/6a/3e/b68c118422ec867fa7ab88444e1274aa40681c606d59ac27de5a5588f082/python_dotenv-1.0.1-py3-none-any.whl
171
+ - pypi: https://files.pythonhosted.org/packages/94/df/e1584757c736c4fba09a3fb4f22fe625cc3367b06c6ece221e4b8c1e3023/pyttsx3-2.98-py3-none-any.whl
172
+ - pypi: https://files.pythonhosted.org/packages/21/27/0c8811fbc3ca188f93b5354e7c286eb91f80a53afa4e11007ef661afa746/pywin32-308-cp312-cp312-win_amd64.whl
173
+ - pypi: https://files.pythonhosted.org/packages/0c/e8/4f648c598b17c3d06e8753d7d13d57542b30d56e6c2dedf9c331ae56312e/PyYAML-6.0.2-cp312-cp312-win_amd64.whl
174
+ - pypi: https://files.pythonhosted.org/packages/f9/9b/335f9764261e915ed497fcdeb11df5dfd6f7bf257d4a6a2a686d80da4d54/requests-2.32.3-py3-none-any.whl
175
+ - pypi: https://files.pythonhosted.org/packages/b7/ad/c7a900591bd152bb47fc4882a27654ea55c7973e6d5d6396298ad3fd6638/ruff-0.9.5-py3-none-win_amd64.whl
176
+ - pypi: https://files.pythonhosted.org/packages/ff/ba/31c7a8131152822b3a2cdeba76398ffb404d81d640de98287d236da90c49/scipy-1.15.1-cp312-cp312-win_amd64.whl
177
+ - pypi: https://files.pythonhosted.org/packages/32/7a/1e9a31a5d07d1d3ed53f9cca128133f52fb898cc49196fe0a66a0b056c2d/sherpa_onnx-1.10.43-cp312-cp312-win_amd64.whl
178
+ - pypi: https://files.pythonhosted.org/packages/b7/ce/149a00dd41f10bc29e5921b496af8b574d8413afcd5e30dfa0ed46c2cc5e/six-1.17.0-py2.py3-none-any.whl
179
+ - pypi: https://files.pythonhosted.org/packages/e9/44/75a9c9421471a6c4805dbf2356f7c181a29c1879239abab1ea2cc8f38b40/sniffio-1.3.1-py3-none-any.whl
180
+ - pypi: https://files.pythonhosted.org/packages/14/e9/6b761de83277f2f02ded7e7ea6f07828ec78e4b229b80e4ca55dd205b9dc/soundfile-0.13.1-py2.py3-none-win_amd64.whl
181
+ - pypi: https://files.pythonhosted.org/packages/66/b7/4a1bc231e0681ebf339337b0cd05b91dc6a0d701fa852bb812e244b7a030/srt-3.5.3.tar.gz
182
+ - pypi: https://files.pythonhosted.org/packages/d9/61/f2b52e107b1fc8944b33ef56bf6ac4ebbe16d91b94d2b87ce013bf63fb84/starlette-0.45.3-py3-none-any.whl
183
+ - pypi: https://files.pythonhosted.org/packages/99/ff/c87e0622b1dadea79d2fb0b25ade9ed98954c9033722eb707053d310d4f3/sympy-1.13.3-py3-none-any.whl
184
+ - pypi: https://files.pythonhosted.org/packages/40/44/4a5f08c96eb108af5cb50b41f76142f0afa346dfa99d5296fe7202a11854/tabulate-0.9.0-py3-none-any.whl
185
+ - pypi: https://files.pythonhosted.org/packages/ef/60/9b9638f081c6f1261e2688bd487625cd1e660d0a85bd469e91d8db969734/tomli-2.2.1-cp312-cp312-win_amd64.whl
186
+ - pypi: https://files.pythonhosted.org/packages/d0/30/dc54f88dd4a2b5dc8a0279bdd7270e735851848b762aeb1c1184ed1f6b14/tqdm-4.67.1-py3-none-any.whl
187
+ - pypi: https://files.pythonhosted.org/packages/26/9f/ad63fc0248c5379346306f8668cda6e2e2e9c95e01216d2b8ffd9ff037d0/typing_extensions-4.12.2-py3-none-any.whl
188
+ - pypi: https://files.pythonhosted.org/packages/c8/19/4ec628951a74043532ca2cf5d97b7b14863931476d117c471e8e2b1eb39f/urllib3-2.3.0-py3-none-any.whl
189
+ - pypi: https://files.pythonhosted.org/packages/61/14/33a3a1352cfa71812a3a21e8c9bfb83f60b0011f5e36f2b1399d51928209/uvicorn-0.34.0-py3-none-any.whl
190
+ - pypi: https://files.pythonhosted.org/packages/ea/94/b0165481bff99a64b29e46e07ac2e0df9f7a957ef13bec4ceab8515f44e3/watchfiles-1.0.4-cp312-cp312-win_amd64.whl
191
+ - pypi: https://files.pythonhosted.org/packages/5a/84/44687a29792a70e111c5c477230a72c4b957d88d16141199bf9acb7537a3/websocket_client-1.8.0-py3-none-any.whl
192
+ - pypi: https://files.pythonhosted.org/packages/b3/7d/32cdb77990b3bdc34a306e0a0f73a1275221e9a66d869f6ff833c95b56ef/websockets-14.2-cp312-cp312-win_amd64.whl
193
+ - pypi: https://files.pythonhosted.org/packages/e1/07/c6fe3ad3e685340704d314d765b7912993bcb8dc198f0e7a89382d37974b/win32_setctime-1.2.0-py3-none-any.whl
194
+ - pypi: https://files.pythonhosted.org/packages/34/45/0e055320daaabfc169b21ff6174567b2c910c45617b0d79c68d7ab349b02/yarl-1.18.3-cp312-cp312-win_amd64.whl
195
+ - pypi: .
196
+ packages:
197
+ - conda: https://conda.anaconda.org/conda-forge/linux-64/_libgcc_mutex-0.1-conda_forge.tar.bz2
198
+ sha256: fe51de6107f9edc7aa4f786a70f4a883943bc9d39b3bb7307c04c41410990726
199
+ md5: d7c89558ba9fa0495403155b64376d81
200
+ license: None
201
+ purls: []
202
+ size: 2562
203
+ timestamp: 1578324546067
204
+ - conda: https://conda.anaconda.org/conda-forge/linux-64/_openmp_mutex-4.5-2_gnu.tar.bz2
205
+ build_number: 16
206
+ sha256: fbe2c5e56a653bebb982eda4876a9178aedfc2b545f25d0ce9c4c0b508253d22
207
+ md5: 73aaf86a425cc6e73fcf236a5a46396d
208
+ depends:
209
+ - _libgcc_mutex 0.1 conda_forge
210
+ - libgomp >=7.5.0
211
+ constrains:
212
+ - openmp_impl 9999
213
+ license: BSD-3-Clause
214
+ license_family: BSD
215
+ purls: []
216
+ size: 23621
217
+ timestamp: 1650670423406
218
+ - pypi: https://files.pythonhosted.org/packages/44/4c/03fb05f56551828ec67ceb3665e5dc51638042d204983a03b0a1541475b6/aiohappyeyeballs-2.4.6-py3-none-any.whl
219
+ name: aiohappyeyeballs
220
+ version: 2.4.6
221
+ sha256: 147ec992cf873d74f5062644332c539fcd42956dc69453fe5204195e560517e1
222
+ requires_python: '>=3.9'
223
+ - pypi: https://files.pythonhosted.org/packages/17/e2/9f744cee0861af673dc271a3351f59ebd5415928e20080ab85be25641471/aiohttp-3.11.12-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
224
+ name: aiohttp
225
+ version: 3.11.12
226
+ sha256: 6dfe7f984f28a8ae94ff3a7953cd9678550dbd2a1f9bda5dd9c5ae627744c78e
227
+ requires_dist:
228
+ - aiohappyeyeballs>=2.3.0
229
+ - aiosignal>=1.1.2
230
+ - async-timeout>=4.0,<6.0 ; python_full_version < '3.11'
231
+ - attrs>=17.3.0
232
+ - frozenlist>=1.1.1
233
+ - multidict>=4.5,<7.0
234
+ - propcache>=0.2.0
235
+ - yarl>=1.17.0,<2.0
236
+ - aiodns>=3.2.0 ; (sys_platform == 'darwin' and extra == 'speedups') or (sys_platform == 'linux' and extra == 'speedups')
237
+ - brotli ; platform_python_implementation == 'CPython' and extra == 'speedups'
238
+ - brotlicffi ; platform_python_implementation != 'CPython' and extra == 'speedups'
239
+ requires_python: '>=3.9'
240
+ - pypi: https://files.pythonhosted.org/packages/3d/63/5eca549d34d141bcd9de50d4e59b913f3641559460c739d5e215693cb54a/aiohttp-3.11.12-cp312-cp312-win_amd64.whl
241
+ name: aiohttp
242
+ version: 3.11.12
243
+ sha256: 54775858c7f2f214476773ce785a19ee81d1294a6bedc5cc17225355aab74802
244
+ requires_dist:
245
+ - aiohappyeyeballs>=2.3.0
246
+ - aiosignal>=1.1.2
247
+ - async-timeout>=4.0,<6.0 ; python_full_version < '3.11'
248
+ - attrs>=17.3.0
249
+ - frozenlist>=1.1.1
250
+ - multidict>=4.5,<7.0
251
+ - propcache>=0.2.0
252
+ - yarl>=1.17.0,<2.0
253
+ - aiodns>=3.2.0 ; (sys_platform == 'darwin' and extra == 'speedups') or (sys_platform == 'linux' and extra == 'speedups')
254
+ - brotli ; platform_python_implementation == 'CPython' and extra == 'speedups'
255
+ - brotlicffi ; platform_python_implementation != 'CPython' and extra == 'speedups'
256
+ requires_python: '>=3.9'
257
+ - pypi: https://files.pythonhosted.org/packages/ec/6a/bc7e17a3e87a2985d3e8f4da4cd0f481060eb78fb08596c42be62c90a4d9/aiosignal-1.3.2-py2.py3-none-any.whl
258
+ name: aiosignal
259
+ version: 1.3.2
260
+ sha256: 45cde58e409a301715980c2b01d0c28bdde3770d8290b5eb2173759d9acb31a5
261
+ requires_dist:
262
+ - frozenlist>=1.1.0
263
+ requires_python: '>=3.9'
264
+ - pypi: https://files.pythonhosted.org/packages/78/b6/6307fbef88d9b5ee7421e68d78a9f162e0da4900bc5f5793f6d3d0e34fb8/annotated_types-0.7.0-py3-none-any.whl
265
+ name: annotated-types
266
+ version: 0.7.0
267
+ sha256: 1f02e8b43a8fbbc3f3e0d4f0f4bfc8131bcb4eebe8849b8e5c773f3a1c582a53
268
+ requires_dist:
269
+ - typing-extensions>=4.0.0 ; python_full_version < '3.9'
270
+ requires_python: '>=3.8'
271
+ - pypi: https://files.pythonhosted.org/packages/74/86/e81814e542d1eaeec84d2312bec93a99b9ef1d78d9bfae1fc5dd74abdf15/anthropic-0.45.2-py3-none-any.whl
272
+ name: anthropic
273
+ version: 0.45.2
274
+ sha256: ecd746f7274451dfcb7e1180571ead624c7e1195d1d46cb7c70143d2aedb4d35
275
+ requires_dist:
276
+ - anyio>=3.5.0,<5
277
+ - distro>=1.7.0,<2
278
+ - httpx>=0.23.0,<1
279
+ - jiter>=0.4.0,<1
280
+ - pydantic>=1.9.0,<3
281
+ - sniffio
282
+ - typing-extensions>=4.10,<5
283
+ - boto3>=1.28.57 ; extra == 'bedrock'
284
+ - botocore>=1.31.57 ; extra == 'bedrock'
285
+ - google-auth>=2,<3 ; extra == 'vertex'
286
+ requires_python: '>=3.8'
287
+ - pypi: https://files.pythonhosted.org/packages/46/eb/e7f063ad1fec6b3178a3cd82d1a3c4de82cccf283fc42746168188e1cdd5/anyio-4.8.0-py3-none-any.whl
288
+ name: anyio
289
+ version: 4.8.0
290
+ sha256: b5011f270ab5eb0abf13385f851315585cc37ef330dd88e27ec3d34d651fd47a
291
+ requires_dist:
292
+ - exceptiongroup>=1.0.2 ; python_full_version < '3.11'
293
+ - idna>=2.8
294
+ - sniffio>=1.1
295
+ - typing-extensions>=4.5 ; python_full_version < '3.13'
296
+ - trio>=0.26.1 ; extra == 'trio'
297
+ - anyio[trio] ; extra == 'test'
298
+ - coverage[toml]>=7 ; extra == 'test'
299
+ - exceptiongroup>=1.2.0 ; extra == 'test'
300
+ - hypothesis>=4.0 ; extra == 'test'
301
+ - psutil>=5.9 ; extra == 'test'
302
+ - pytest>=7.0 ; extra == 'test'
303
+ - trustme ; extra == 'test'
304
+ - truststore>=0.9.1 ; python_full_version >= '3.10' and extra == 'test'
305
+ - uvloop>=0.21 ; python_full_version < '3.14' and platform_python_implementation == 'CPython' and platform_system != 'Windows' and extra == 'test'
306
+ - packaging ; extra == 'doc'
307
+ - sphinx~=7.4 ; extra == 'doc'
308
+ - sphinx-rtd-theme ; extra == 'doc'
309
+ - sphinx-autodoc-typehints>=1.2.0 ; extra == 'doc'
310
+ requires_python: '>=3.9'
311
+ - pypi: https://files.pythonhosted.org/packages/fc/30/d4986a882011f9df997a55e6becd864812ccfcd821d64aac8570ee39f719/attrs-25.1.0-py3-none-any.whl
312
+ name: attrs
313
+ version: 25.1.0
314
+ sha256: c75a69e28a550a7e93789579c22aa26b0f5b83b75dc4e08fe092980051e1090a
315
+ requires_dist:
316
+ - cloudpickle ; platform_python_implementation == 'CPython' and extra == 'benchmark'
317
+ - hypothesis ; extra == 'benchmark'
318
+ - mypy>=1.11.1 ; python_full_version >= '3.10' and platform_python_implementation == 'CPython' and extra == 'benchmark'
319
+ - pympler ; extra == 'benchmark'
320
+ - pytest-codspeed ; extra == 'benchmark'
321
+ - pytest-mypy-plugins ; python_full_version >= '3.10' and platform_python_implementation == 'CPython' and extra == 'benchmark'
322
+ - pytest-xdist[psutil] ; extra == 'benchmark'
323
+ - pytest>=4.3.0 ; extra == 'benchmark'
324
+ - cloudpickle ; platform_python_implementation == 'CPython' and extra == 'cov'
325
+ - coverage[toml]>=5.3 ; extra == 'cov'
326
+ - hypothesis ; extra == 'cov'
327
+ - mypy>=1.11.1 ; python_full_version >= '3.10' and platform_python_implementation == 'CPython' and extra == 'cov'
328
+ - pympler ; extra == 'cov'
329
+ - pytest-mypy-plugins ; python_full_version >= '3.10' and platform_python_implementation == 'CPython' and extra == 'cov'
330
+ - pytest-xdist[psutil] ; extra == 'cov'
331
+ - pytest>=4.3.0 ; extra == 'cov'
332
+ - cloudpickle ; platform_python_implementation == 'CPython' and extra == 'dev'
333
+ - hypothesis ; extra == 'dev'
334
+ - mypy>=1.11.1 ; python_full_version >= '3.10' and platform_python_implementation == 'CPython' and extra == 'dev'
335
+ - pre-commit-uv ; extra == 'dev'
336
+ - pympler ; extra == 'dev'
337
+ - pytest-mypy-plugins ; python_full_version >= '3.10' and platform_python_implementation == 'CPython' and extra == 'dev'
338
+ - pytest-xdist[psutil] ; extra == 'dev'
339
+ - pytest>=4.3.0 ; extra == 'dev'
340
+ - cogapp ; extra == 'docs'
341
+ - furo ; extra == 'docs'
342
+ - myst-parser ; extra == 'docs'
343
+ - sphinx ; extra == 'docs'
344
+ - sphinx-notfound-page ; extra == 'docs'
345
+ - sphinxcontrib-towncrier ; extra == 'docs'
346
+ - towncrier<24.7 ; extra == 'docs'
347
+ - cloudpickle ; platform_python_implementation == 'CPython' and extra == 'tests'
348
+ - hypothesis ; extra == 'tests'
349
+ - mypy>=1.11.1 ; python_full_version >= '3.10' and platform_python_implementation == 'CPython' and extra == 'tests'
350
+ - pympler ; extra == 'tests'
351
+ - pytest-mypy-plugins ; python_full_version >= '3.10' and platform_python_implementation == 'CPython' and extra == 'tests'
352
+ - pytest-xdist[psutil] ; extra == 'tests'
353
+ - pytest>=4.3.0 ; extra == 'tests'
354
+ - mypy>=1.11.1 ; python_full_version >= '3.10' and platform_python_implementation == 'CPython' and extra == 'tests-mypy'
355
+ - pytest-mypy-plugins ; python_full_version >= '3.10' and platform_python_implementation == 'CPython' and extra == 'tests-mypy'
356
+ requires_python: '>=3.8'
357
+ - pypi: https://files.pythonhosted.org/packages/52/bb/ef7a29f5717cca646be6698d80e542446a6a442be897c8f67bf93551c672/azure_cognitiveservices_speech-1.42.0-py3-none-win_amd64.whl
358
+ name: azure-cognitiveservices-speech
359
+ version: 1.42.0
360
+ sha256: 32076ee03b3b402a2e8841f2c21e5cd54dc3ffbf5af183426344727298c8bbd4
361
+ requires_python: '>=3.7'
362
+ - pypi: https://files.pythonhosted.org/packages/83/f7/9241ad7154e554730ea56271e14ad1115c278b26a81eb892eac16fabb480/azure_cognitiveservices_speech-1.42.0-py3-none-manylinux1_x86_64.whl
363
+ name: azure-cognitiveservices-speech
364
+ version: 1.42.0
365
+ sha256: 90890a147499239f37b0b1a5112c51820b90fa2b5adafa0df4da6cc0c211887f
366
+ requires_python: '>=3.7'
367
+ - conda: https://conda.anaconda.org/conda-forge/linux-64/bzip2-1.0.8-h4bc722e_7.conda
368
+ sha256: 5ced96500d945fb286c9c838e54fa759aa04a7129c59800f0846b4335cee770d
369
+ md5: 62ee74e96c5ebb0af99386de58cf9553
370
+ depends:
371
+ - __glibc >=2.17,<3.0.a0
372
+ - libgcc-ng >=12
373
+ license: bzip2-1.0.6
374
+ license_family: BSD
375
+ purls: []
376
+ size: 252783
377
+ timestamp: 1720974456583
378
+ - conda: https://conda.anaconda.org/conda-forge/win-64/bzip2-1.0.8-h2466b09_7.conda
379
+ sha256: 35a5dad92e88fdd7fc405e864ec239486f4f31eec229e31686e61a140a8e573b
380
+ md5: 276e7ffe9ffe39688abc665ef0f45596
381
+ depends:
382
+ - ucrt >=10.0.20348.0
383
+ - vc >=14.2,<15
384
+ - vc14_runtime >=14.29.30139
385
+ license: bzip2-1.0.6
386
+ license_family: BSD
387
+ purls: []
388
+ size: 54927
389
+ timestamp: 1720974860185
390
+ - conda: https://conda.anaconda.org/conda-forge/linux-64/ca-certificates-2025.1.31-hbcca054_0.conda
391
+ sha256: bf832198976d559ab44d6cdb315642655547e26d826e34da67cbee6624cda189
392
+ md5: 19f3a56f68d2fd06c516076bff482c52
393
+ license: ISC
394
+ purls: []
395
+ size: 158144
396
+ timestamp: 1738298224464
397
+ - conda: https://conda.anaconda.org/conda-forge/win-64/ca-certificates-2025.1.31-h56e8100_0.conda
398
+ sha256: 1bedccdf25a3bd782d6b0e57ddd97cdcda5501716009f2de4479a779221df155
399
+ md5: 5304a31607974dfc2110dfbb662ed092
400
+ license: ISC
401
+ purls: []
402
+ size: 158690
403
+ timestamp: 1738298232550
404
+ - pypi: https://files.pythonhosted.org/packages/38/fc/bce832fd4fd99766c04d1ee0eead6b0ec6486fb100ae5e74c1d91292b982/certifi-2025.1.31-py3-none-any.whl
405
+ name: certifi
406
+ version: 2025.1.31
407
+ sha256: ca78db4565a652026a4db2bcdf68f2fb589ea80d0be70e03929ed730746b84fe
408
+ requires_python: '>=3.6'
409
+ - pypi: https://files.pythonhosted.org/packages/50/b9/db34c4755a7bd1cb2d1603ac3863f22bcecbd1ba29e5ee841a4bc510b294/cffi-1.17.1-cp312-cp312-win_amd64.whl
410
+ name: cffi
411
+ version: 1.17.1
412
+ sha256: 51392eae71afec0d0c8fb1a53b204dbb3bcabcb3c9b807eedf3e1e6ccf2de903
413
+ requires_dist:
414
+ - pycparser
415
+ requires_python: '>=3.8'
416
+ - pypi: https://files.pythonhosted.org/packages/b2/d5/da47df7004cb17e4955df6a43d14b3b4ae77737dff8bf7f8f333196717bf/cffi-1.17.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
417
+ name: cffi
418
+ version: 1.17.1
419
+ sha256: b62ce867176a75d03a665bad002af8e6d54644fad99a3c70905c543130e39d93
420
+ requires_dist:
421
+ - pycparser
422
+ requires_python: '>=3.8'
423
+ - pypi: https://files.pythonhosted.org/packages/38/6f/f5fbc992a329ee4e0f288c1fe0e2ad9485ed064cac731ed2fe47dcc38cbf/chardet-5.2.0-py3-none-any.whl
424
+ name: chardet
425
+ version: 5.2.0
426
+ sha256: e1cf59446890a00105fe7b7912492ea04b6e6f06d4b742b2c788469e34c82970
427
+ requires_python: '>=3.7'
428
+ - pypi: https://files.pythonhosted.org/packages/21/5b/1b390b03b1d16c7e382b561c5329f83cc06623916aab983e8ab9239c7d5c/charset_normalizer-3.4.1-cp312-cp312-win_amd64.whl
429
+ name: charset-normalizer
430
+ version: 3.4.1
431
+ sha256: 6ff8a4a60c227ad87030d76e99cd1698345d4491638dfa6673027c48b3cd395f
432
+ requires_python: '>=3.7'
433
+ - pypi: https://files.pythonhosted.org/packages/3e/a2/513f6cbe752421f16d969e32f3583762bfd583848b763913ddab8d9bfd4f/charset_normalizer-3.4.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
434
+ name: charset-normalizer
435
+ version: 3.4.1
436
+ sha256: bc2722592d8998c870fa4e290c2eec2c1569b87fe58618e67d38b4665dfa680d
437
+ requires_python: '>=3.7'
438
+ - pypi: https://files.pythonhosted.org/packages/7e/d4/7ebdbd03970677812aac39c869717059dbb71a4cfc033ca6e5221787892c/click-8.1.8-py3-none-any.whl
439
+ name: click
440
+ version: 8.1.8
441
+ sha256: 63c132bbbed01578a06712a2d1f497bb62d9c1c0d329b7903a866228027263b2
442
+ requires_dist:
443
+ - colorama ; platform_system == 'Windows'
+ - importlib-metadata ; python_full_version < '3.8'
+ requires_python: '>=3.7'
+ - pypi: https://files.pythonhosted.org/packages/d1/d6/3965ed04c63042e047cb6a3e6ed1a63a35087b6a609aa3a15ed8ac56c221/colorama-0.4.6-py2.py3-none-any.whl
+ name: colorama
+ version: 0.4.6
+ sha256: 4f1d9991f5acc0ca119f9d443620b77f9d6b33703e51011c16baf57afb285fc6
+ requires_python: '>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*,!=3.5.*,!=3.6.*'
+ - pypi: https://files.pythonhosted.org/packages/a7/06/3d6badcf13db419e25b07041d9c7b4a2c331d3f4e7134445ec5df57714cd/coloredlogs-15.0.1-py2.py3-none-any.whl
+ name: coloredlogs
+ version: 15.0.1
+ sha256: 612ee75c546f53e92e70049c9dbfcc18c935a2b9a53b66085ce9ef6a6e5c0934
+ requires_dist:
+ - humanfriendly>=9.1
+ - capturer>=2.4 ; extra == 'cron'
+ requires_python: '>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*'
+ - pypi: https://files.pythonhosted.org/packages/4c/44/72009bb0a0d8286f6408c9cb70552350e21e9c280bfa1ef30784b30dfc0f/comtypes-1.4.10-py3-none-any.whl
+ name: comtypes
+ version: 1.4.10
+ sha256: e078555721ee7ab40648a3363697d420b845b323e5944b55846e96aff97d2534
+ requires_python: '>=3.8'
+ - conda: https://conda.anaconda.org/conda-forge/noarch/cuda-version-11.8-h70ddcb2_3.conda
+ sha256: 53e0ffc14ea2f2b8c12320fd2aa38b01112763eba851336ff5953b436ae61259
+ md5: 670f0e1593b8c1d84f57ad5fe5256799
+ constrains:
+ - cudatoolkit 11.8|11.8.*
+ - __cuda >=11
+ license: LicenseRef-NVIDIA-End-User-License-Agreement
+ purls: []
+ size: 21043
+ timestamp: 1709765911943
+ - conda: https://conda.anaconda.org/conda-forge/linux-64/cudatoolkit-11.8.0-h4ba93d1_13.conda
+ sha256: 1797bacaf5350f272413c7f50787c01aef0e8eb955df0f0db144b10be2819752
+ md5: eb43f5f1f16e2fad2eba22219c3e499b
+ depends:
+ - __glibc >=2.17,<3.0.a0
+ - libgcc-ng >=12
+ - libstdcxx-ng >=12
+ constrains:
+ - __cuda >=11
+ license: LicenseRef-NVIDIA-End-User-License-Agreement
+ purls: []
+ size: 715605660
+ timestamp: 1706881738892
+ - conda: https://conda.anaconda.org/conda-forge/win-64/cudatoolkit-11.8.0-h09e9e62_13.conda
+ sha256: 45491dddc59d4ae8abba3640056da3c3a81b93e87a5b56f336f5ffabf58d14b3
+ md5: 56d440fefc5a01e631bbdb9e1f1701ad
+ depends:
+ - ucrt >=10.0.20348.0
+ - vc >=14.2,<15
+ - vc14_runtime >=14.29.30139
+ constrains:
+ - __cuda >=11
+ license: LicenseRef-NVIDIA-End-User-License-Agreement
+ purls: []
+ size: 726105666
+ timestamp: 1706883637901
+ - conda: https://conda.anaconda.org/conda-forge/linux-64/cudnn-8.9.7.29-hbc23b4c_3.conda
+ sha256: c553234d447d9938556f067aba7a4686c8e5427e03e740e67199da3782cc420c
+ md5: 4a2d5fab2871d95544de4e1752948d0f
+ depends:
+ - __glibc >=2.17
+ - __glibc >=2.17,<3.0.a0
+ - cuda-version >=11.0,<12.0a0
+ - cudatoolkit 11.*
+ - libgcc-ng >=12
+ - libstdcxx-ng >=12
+ - libzlib >=1.2.13,<2.0.0a0
+ license: LicenseRef-cuDNN-Software-License-Agreement
+ purls: []
+ size: 465458543
+ timestamp: 1710307873021
+ - conda: https://conda.anaconda.org/conda-forge/win-64/cudnn-8.9.7.29-he6de189_3.conda
+ sha256: 140b25e4df96d317de8a2ebf744b9e8e763a8a146acfdd8acd411659c6dafd80
+ md5: 083d66898b460391c0b912e92e141250
+ depends:
+ - cuda-version >=11.0,<12.0a0
+ - cudatoolkit 11.*
+ - libzlib-wapi >=1.2.13,<1.3.0a0
+ - ucrt >=10.0.20348.0
+ - vc >=14.2,<15
+ - vc14_runtime >=14.29.30139
+ license: LicenseRef-cuDNN-Software-License-Agreement
+ purls: []
+ size: 458138472
+ timestamp: 1710308712723
+ - pypi: https://files.pythonhosted.org/packages/12/b3/231ffd4ab1fc9d679809f356cebee130ac7daa00d6d6f3206dd4fd137e9e/distro-1.9.0-py3-none-any.whl
+ name: distro
+ version: 1.9.0
+ sha256: 7bffd925d65168f85027d8da9af6bddab658135b840670a223589bc0c8ef02b2
+ requires_python: '>=3.6'
+ - pypi: https://files.pythonhosted.org/packages/f8/37/00c211f1021f9b04dde72dcbee72ce66248519c3899a47b06f8940a67c08/edge_tts-7.0.0-py3-none-any.whl
+ name: edge-tts
+ version: 7.0.0
+ sha256: c99e91caba83c28e6f1fff1098a8188f541ba9615944c7a6f8f5625e02848044
+ requires_dist:
+ - aiohttp>=3.8.0,<4.0.0
+ - certifi>=2023.11.17
+ - srt>=3.4.1,<4.0.0
+ - tabulate>=0.4.4,<1.0.0
+ - typing-extensions>=4.1.0,<5.0.0
+ - black ; extra == 'dev'
+ - isort ; extra == 'dev'
+ - mypy ; extra == 'dev'
+ - pylint ; extra == 'dev'
+ - types-tabulate ; extra == 'dev'
+ requires_python: '>=3.7'
+ - pypi: https://files.pythonhosted.org/packages/8f/7d/2d6ce181d7a5f51dedb8c06206cbf0ec026a99bf145edd309f9e17c3282f/fastapi-0.115.8-py3-none-any.whl
+ name: fastapi
+ version: 0.115.8
+ sha256: 753a96dd7e036b34eeef8babdfcfe3f28ff79648f86551eb36bfc1b0bf4a8cbf
+ requires_dist:
+ - starlette>=0.40.0,<0.46.0
+ - pydantic>=1.7.4,!=1.8,!=1.8.1,!=2.0.0,!=2.0.1,!=2.1.0,<3.0.0
+ - typing-extensions>=4.8.0
+ - fastapi-cli[standard]>=0.0.5 ; extra == 'standard'
+ - httpx>=0.23.0 ; extra == 'standard'
+ - jinja2>=3.1.5 ; extra == 'standard'
+ - python-multipart>=0.0.18 ; extra == 'standard'
+ - email-validator>=2.0.0 ; extra == 'standard'
+ - uvicorn[standard]>=0.12.0 ; extra == 'standard'
+ - fastapi-cli[standard]>=0.0.5 ; extra == 'all'
+ - httpx>=0.23.0 ; extra == 'all'
+ - jinja2>=3.1.5 ; extra == 'all'
+ - python-multipart>=0.0.18 ; extra == 'all'
+ - itsdangerous>=1.1.0 ; extra == 'all'
+ - pyyaml>=5.3.1 ; extra == 'all'
+ - ujson>=4.0.1,!=4.0.2,!=4.1.0,!=4.2.0,!=4.3.0,!=5.0.0,!=5.1.0 ; extra == 'all'
+ - orjson>=3.2.1 ; extra == 'all'
+ - email-validator>=2.0.0 ; extra == 'all'
+ - uvicorn[standard]>=0.12.0 ; extra == 'all'
+ - pydantic-settings>=2.0.0 ; extra == 'all'
+ - pydantic-extra-types>=2.0.0 ; extra == 'all'
+ requires_python: '>=3.8'
+ - pypi: https://files.pythonhosted.org/packages/0e/e2/b066e6e02d67bf5261a6d7539648c6da3365cc9eff3eb6d82009595d84d9/flatbuffers-25.1.24-py2.py3-none-any.whl
+ name: flatbuffers
+ version: 25.1.24
+ sha256: 1abfebaf4083117225d0723087ea909896a34e3fec933beedb490d595ba24145
+ - pypi: https://files.pythonhosted.org/packages/af/f2/64b73a9bb86f5a89fb55450e97cd5c1f84a862d4ff90d9fd1a73ab0f64a5/frozenlist-1.5.0-cp312-cp312-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl
+ name: frozenlist
+ version: 1.5.0
+ sha256: 000a77d6034fbad9b6bb880f7ec073027908f1b40254b5d6f26210d2dab1240e
+ requires_python: '>=3.8'
+ - pypi: https://files.pythonhosted.org/packages/b1/56/4e45136ffc6bdbfa68c29ca56ef53783ef4c2fd395f7cbf99a2624aa9aaa/frozenlist-1.5.0-cp312-cp312-win_amd64.whl
+ name: frozenlist
+ version: 1.5.0
+ sha256: 8969190d709e7c48ea386db202d708eb94bdb29207a1f269bab1196ce0dcca1f
+ requires_python: '>=3.8'
+ - pypi: https://files.pythonhosted.org/packages/b0/6c/5a53d632b44ef7655ac8d9b34432e13160917f9307c94b1467efd34e336e/groq-0.18.0-py3-none-any.whl
+ name: groq
+ version: 0.18.0
+ sha256: 81d5ac00057a45d8ce559d23ab5d3b3893011d1f12c35187ab35a9182d826ea6
+ requires_dist:
+ - anyio>=3.5.0,<5
+ - distro>=1.7.0,<2
+ - httpx>=0.23.0,<1
+ - pydantic>=1.9.0,<3
+ - sniffio
+ - typing-extensions>=4.10,<5
+ requires_python: '>=3.8'
+ - pypi: https://files.pythonhosted.org/packages/95/04/ff642e65ad6b90db43e668d70ffb6736436c7ce41fcc549f4e9472234127/h11-0.14.0-py3-none-any.whl
+ name: h11
+ version: 0.14.0
+ sha256: e3fe4ac4b851c468cc8363d500db52c2ead036020723024a109d37346efaa761
+ requires_dist:
+ - typing-extensions ; python_full_version < '3.8'
+ requires_python: '>=3.7'
+ - pypi: https://files.pythonhosted.org/packages/87/f5/72347bc88306acb359581ac4d52f23c0ef445b57157adedb9aee0cd689d2/httpcore-1.0.7-py3-none-any.whl
+ name: httpcore
+ version: 1.0.7
+ sha256: a3fff8f43dc260d5bd363d9f9cf1830fa3a458b332856f34282de498ed420edd
+ requires_dist:
+ - certifi
+ - h11>=0.13,<0.15
+ - anyio>=4.0,<5.0 ; extra == 'asyncio'
+ - h2>=3,<5 ; extra == 'http2'
+ - socksio==1.* ; extra == 'socks'
+ - trio>=0.22.0,<1.0 ; extra == 'trio'
+ requires_python: '>=3.8'
+ - pypi: https://files.pythonhosted.org/packages/12/b7/5cae71a8868e555f3f67a50ee7f673ce36eac970f029c0c5e9d584352961/httptools-0.6.4-cp312-cp312-win_amd64.whl
+ name: httptools
+ version: 0.6.4
+ sha256: db78cb9ca56b59b016e64b6031eda5653be0589dba2b1b43453f6e8b405a0970
+ requires_dist:
+ - cython>=0.29.24 ; extra == 'test'
+ requires_python: '>=3.8.0'
+ - pypi: https://files.pythonhosted.org/packages/f7/d8/b644c44acc1368938317d76ac991c9bba1166311880bcc0ac297cb9d6bd7/httptools-0.6.4-cp312-cp312-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl
+ name: httptools
+ version: 0.6.4
+ sha256: 16e603a3bff50db08cd578d54f07032ca1631450ceb972c2f834c2b860c28ea2
+ requires_dist:
+ - cython>=0.29.24 ; extra == 'test'
+ requires_python: '>=3.8.0'
+ - pypi: https://files.pythonhosted.org/packages/2a/39/e50c7c3a983047577ee07d2a9e53faf5a69493943ec3f6a384bdc792deb2/httpx-0.28.1-py3-none-any.whl
+ name: httpx
+ version: 0.28.1
+ sha256: d909fcccc110f8c7faf814ca82a9a4d816bc5a6dbfea25d6591d6985b8ba59ad
+ requires_dist:
+ - anyio
+ - certifi
+ - httpcore==1.*
+ - idna
+ - brotli ; platform_python_implementation == 'CPython' and extra == 'brotli'
+ - brotlicffi ; platform_python_implementation != 'CPython' and extra == 'brotli'
+ - click==8.* ; extra == 'cli'
+ - pygments==2.* ; extra == 'cli'
+ - rich>=10,<14 ; extra == 'cli'
+ - h2>=3,<5 ; extra == 'http2'
+ - socksio==1.* ; extra == 'socks'
+ - zstandard>=0.18.0 ; extra == 'zstd'
+ requires_python: '>=3.8'
+ - pypi: https://files.pythonhosted.org/packages/f0/0f/310fb31e39e2d734ccaa2c0fb981ee41f7bd5056ce9bc29b2248bd569169/humanfriendly-10.0-py2.py3-none-any.whl
+ name: humanfriendly
+ version: '10.0'
+ sha256: 1697e1a8a8f550fd43c2865cd84542fc175a61dcb779b6fee18cf6b6ccba1477
+ requires_dist:
+ - monotonic ; python_full_version == '2.7.*'
+ - pyreadline ; python_full_version < '3.8' and sys_platform == 'win32'
+ - pyreadline3 ; python_full_version >= '3.8' and sys_platform == 'win32'
+ requires_python: '>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*,!=3.4.*'
+ - pypi: https://files.pythonhosted.org/packages/76/c6/c88e154df9c4e1a2a66ccf0005a88dfb2650c1dffb6f5ce603dfbd452ce3/idna-3.10-py3-none-any.whl
+ name: idna
+ version: '3.10'
+ sha256: 946d195a0d259cbba61165e88e65941f16e9b36ea6ddb97f00452bae8b1287d3
+ requires_dist:
+ - ruff>=0.6.2 ; extra == 'all'
+ - mypy>=1.11.2 ; extra == 'all'
+ - pytest>=8.3.2 ; extra == 'all'
+ - flake8>=7.1.1 ; extra == 'all'
+ requires_python: '>=3.6'
+ - pypi: https://files.pythonhosted.org/packages/17/61/beea645c0bf398ced8b199e377b61eb999d8e46e053bb285c91c3d3eaab0/jiter-0.8.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
+ name: jiter
+ version: 0.8.2
+ sha256: 14601dcac4889e0a1c75ccf6a0e4baf70dbc75041e51bcf8d0e9274519df6887
+ requires_python: '>=3.8'
+ - pypi: https://files.pythonhosted.org/packages/41/69/6d4bbe66b3b3b4507e47aa1dd5d075919ad242b4b1115b3f80eecd443687/jiter-0.8.2-cp312-cp312-win_amd64.whl
+ name: jiter
+ version: 0.8.2
+ sha256: 83c0efd80b29695058d0fd2fa8a556490dbce9804eac3e281f373bbc99045f6c
+ requires_python: '>=3.8'
+ - pypi: https://files.pythonhosted.org/packages/0e/72/a3add0e4eec4eb9e2569554f7c70f4a3c27712f40e3284d483e88094cc0e/langdetect-1.0.9.tar.gz
+ name: langdetect
+ version: 1.0.9
+ sha256: cbc1fef89f8d062739774bd51eda3da3274006b3661d199c2655f6b3f6d605a0
+ requires_dist:
+ - six
+ - conda: https://conda.anaconda.org/conda-forge/linux-64/ld_impl_linux-64-2.43-h712a8e2_2.conda
+ sha256: 7c91cea91b13f4314d125d1bedb9d03a29ebbd5080ccdea70260363424646dbe
+ md5: 048b02e3962f066da18efe3a21b77672
+ depends:
+ - __glibc >=2.17,<3.0.a0
+ constrains:
+ - binutils_impl_linux-64 2.43
+ license: GPL-3.0-only
+ license_family: GPL
+ purls: []
+ size: 669211
+ timestamp: 1729655358674
+ - conda: https://conda.anaconda.org/conda-forge/linux-64/libexpat-2.6.4-h5888daf_0.conda
+ sha256: 56541b98447b58e52d824bd59d6382d609e11de1f8adf20b23143e353d2b8d26
+ md5: db833e03127376d461e1e13e76f09b6c
+ depends:
+ - __glibc >=2.17,<3.0.a0
+ - libgcc >=13
+ constrains:
+ - expat 2.6.4.*
+ license: MIT
+ license_family: MIT
+ purls: []
+ size: 73304
+ timestamp: 1730967041968
+ - conda: https://conda.anaconda.org/conda-forge/win-64/libexpat-2.6.4-he0c23c2_0.conda
+ sha256: 0c0447bf20d1013d5603499de93a16b6faa92d7ead870d96305c0f065b6a5a12
+ md5: eb383771c680aa792feb529eaf9df82f
+ depends:
+ - ucrt >=10.0.20348.0
+ - vc >=14.2,<15
+ - vc14_runtime >=14.29.30139
+ constrains:
+ - expat 2.6.4.*
+ license: MIT
+ license_family: MIT
+ purls: []
+ size: 139068
+ timestamp: 1730967442102
+ - conda: https://conda.anaconda.org/conda-forge/linux-64/libffi-3.4.2-h7f98852_5.tar.bz2
+ sha256: ab6e9856c21709b7b517e940ae7028ae0737546122f83c2aa5d692860c3b149e
+ md5: d645c6d2ac96843a2bfaccd2d62b3ac3
+ depends:
+ - libgcc-ng >=9.4.0
+ license: MIT
+ license_family: MIT
+ purls: []
+ size: 58292
+ timestamp: 1636488182923
+ - conda: https://conda.anaconda.org/conda-forge/win-64/libffi-3.4.2-h8ffe710_5.tar.bz2
+ sha256: 1951ab740f80660e9bc07d2ed3aefb874d78c107264fd810f24a1a6211d4b1a5
+ md5: 2c96d1b6915b408893f9472569dee135
+ depends:
+ - vc >=14.1,<15.0a0
+ - vs2015_runtime >=14.16.27012
+ license: MIT
+ license_family: MIT
+ purls: []
+ size: 42063
+ timestamp: 1636489106777
+ - conda: https://conda.anaconda.org/conda-forge/linux-64/libgcc-14.2.0-h77fa898_1.conda
+ sha256: 53eb8a79365e58849e7b1a068d31f4f9e718dc938d6f2c03e960345739a03569
+ md5: 3cb76c3f10d3bc7f1105b2fc9db984df
+ depends:
+ - _libgcc_mutex 0.1 conda_forge
+ - _openmp_mutex >=4.5
+ constrains:
+ - libgomp 14.2.0 h77fa898_1
+ - libgcc-ng ==14.2.0=*_1
+ license: GPL-3.0-only WITH GCC-exception-3.1
+ license_family: GPL
+ purls: []
+ size: 848745
+ timestamp: 1729027721139
+ - conda: https://conda.anaconda.org/conda-forge/linux-64/libgcc-ng-14.2.0-h69a702a_1.conda
+ sha256: 3a76969c80e9af8b6e7a55090088bc41da4cffcde9e2c71b17f44d37b7cb87f7
+ md5: e39480b9ca41323497b05492a63bc35b
+ depends:
+ - libgcc 14.2.0 h77fa898_1
+ license: GPL-3.0-only WITH GCC-exception-3.1
+ license_family: GPL
+ purls: []
+ size: 54142
+ timestamp: 1729027726517
+ - conda: https://conda.anaconda.org/conda-forge/linux-64/libgomp-14.2.0-h77fa898_1.conda
+ sha256: 1911c29975ec99b6b906904040c855772ccb265a1c79d5d75c8ceec4ed89cd63
+ md5: cc3573974587f12dda90d96e3e55a702
+ depends:
+ - _libgcc_mutex 0.1 conda_forge
+ license: GPL-3.0-only WITH GCC-exception-3.1
+ license_family: GPL
+ purls: []
+ size: 460992
+ timestamp: 1729027639220
+ - conda: https://conda.anaconda.org/conda-forge/linux-64/liblzma-5.6.4-hb9d3cd8_0.conda
+ sha256: cad52e10319ca4585bc37f0bc7cce99ec7c15dc9168e42ccb96b741b0a27db3f
+ md5: 42d5b6a0f30d3c10cd88cb8584fda1cb
+ depends:
+ - __glibc >=2.17,<3.0.a0
+ - libgcc >=13
+ license: 0BSD
+ purls: []
+ size: 111357
+ timestamp: 1738525339684
+ - conda: https://conda.anaconda.org/conda-forge/win-64/liblzma-5.6.4-h2466b09_0.conda
+ sha256: 3f552b0bdefdd1459ffc827ea3bf70a6a6920c7879d22b6bfd0d73015b55227b
+ md5: c48f6ad0ef0a555b27b233dfcab46a90
+ depends:
+ - ucrt >=10.0.20348.0
+ - vc >=14.2,<15
+ - vc14_runtime >=14.29.30139
+ license: 0BSD
+ purls: []
+ size: 104465
+ timestamp: 1738525557254
+ - conda: https://conda.anaconda.org/conda-forge/linux-64/libnsl-2.0.1-hd590300_0.conda
+ sha256: 26d77a3bb4dceeedc2a41bd688564fe71bf2d149fdcf117049970bc02ff1add6
+ md5: 30fd6e37fe21f86f4bd26d6ee73eeec7
+ depends:
+ - libgcc-ng >=12
+ license: LGPL-2.1-only
+ license_family: GPL
+ purls: []
+ size: 33408
+ timestamp: 1697359010159
+ - conda: https://conda.anaconda.org/conda-forge/linux-64/libsqlite-3.49.0-hee588c1_0.conda
+ sha256: 51c68794a1f32c2742cbcfc8ea8cd730bb19148bed437197380778d3a6d49385
+ md5: a12aa55f2a4446af8aa44d69ac563d58
+ depends:
+ - __glibc >=2.17,<3.0.a0
+ - libgcc >=13
+ - libzlib >=1.3.1,<2.0a0
+ license: Unlicense
+ purls: []
+ size: 917347
+ timestamp: 1739176276854
+ - conda: https://conda.anaconda.org/conda-forge/win-64/libsqlite-3.49.0-h67fdade_0.conda
+ sha256: ff6670afcef19234145cd7e2e3b97a6a591a09f2b98d0920b140bf175979db47
+ md5: b5aae7f19da05cb648ca214bf14ddd81
+ depends:
+ - ucrt >=10.0.20348.0
+ - vc >=14.2,<15
+ - vc14_runtime >=14.29.30139
+ license: Unlicense
+ purls: []
+ size: 1082337
+ timestamp: 1739176678486
+ - conda: https://conda.anaconda.org/conda-forge/linux-64/libstdcxx-14.2.0-hc0a3c3a_1.conda
+ sha256: 4661af0eb9bdcbb5fb33e5d0023b001ad4be828fccdcc56500059d56f9869462
+ md5: 234a5554c53625688d51062645337328
+ depends:
+ - libgcc 14.2.0 h77fa898_1
+ license: GPL-3.0-only WITH GCC-exception-3.1
+ license_family: GPL
+ purls: []
+ size: 3893695
+ timestamp: 1729027746910
+ - conda: https://conda.anaconda.org/conda-forge/linux-64/libstdcxx-ng-14.2.0-h4852527_1.conda
+ sha256: 25bb30b827d4f6d6f0522cc0579e431695503822f144043b93c50237017fffd8
+ md5: 8371ac6457591af2cf6159439c1fd051
+ depends:
+ - libstdcxx 14.2.0 hc0a3c3a_1
+ license: GPL-3.0-only WITH GCC-exception-3.1
+ license_family: GPL
+ purls: []
+ size: 54105
+ timestamp: 1729027780628
+ - conda: https://conda.anaconda.org/conda-forge/linux-64/libuuid-2.38.1-h0b41bf4_0.conda
+ sha256: 787eb542f055a2b3de553614b25f09eefb0a0931b0c87dbcce6efdfd92f04f18
+ md5: 40b61aab5c7ba9ff276c41cfffe6b80b
+ depends:
+ - libgcc-ng >=12
+ license: BSD-3-Clause
+ license_family: BSD
+ purls: []
+ size: 33601
+ timestamp: 1680112270483
+ - conda: https://conda.anaconda.org/conda-forge/linux-64/libxcrypt-4.4.36-hd590300_1.conda
+ sha256: 6ae68e0b86423ef188196fff6207ed0c8195dd84273cb5623b85aa08033a410c
+ md5: 5aa797f8787fe7a17d1b0821485b5adc
+ depends:
+ - libgcc-ng >=12
+ license: LGPL-2.1-or-later
+ purls: []
+ size: 100393
+ timestamp: 1702724383534
+ - conda: https://conda.anaconda.org/conda-forge/linux-64/libzlib-1.3.1-hb9d3cd8_2.conda
+ sha256: d4bfe88d7cb447768e31650f06257995601f89076080e76df55e3112d4e47dc4
+ md5: edb0dca6bc32e4f4789199455a1dbeb8
+ depends:
+ - __glibc >=2.17,<3.0.a0
+ - libgcc >=13
+ constrains:
+ - zlib 1.3.1 *_2
+ license: Zlib
+ license_family: Other
+ purls: []
+ size: 60963
+ timestamp: 1727963148474
+ - conda: https://conda.anaconda.org/conda-forge/win-64/libzlib-1.3.1-h2466b09_2.conda
+ sha256: ba945c6493449bed0e6e29883c4943817f7c79cbff52b83360f7b341277c6402
+ md5: 41fbfac52c601159df6c01f875de31b9
+ depends:
+ - ucrt >=10.0.20348.0
+ - vc >=14.2,<15
+ - vc14_runtime >=14.29.30139
+ constrains:
+ - zlib 1.3.1 *_2
+ license: Zlib
+ license_family: Other
+ purls: []
+ size: 55476
+ timestamp: 1727963768015
+ - conda: https://conda.anaconda.org/conda-forge/win-64/libzlib-wapi-1.2.13-h2466b09_6.conda
+ sha256: dd92ecd1f39e17623fd6149cf0dc7f675d2f9e091ef2b1774a85e9e545434102
+ md5: 84ac5ada002445227139f8f659cf6d93
+ depends:
+ - ucrt >=10.0.20348.0
+ - vc >=14.2,<15
+ - vc14_runtime >=14.29.30139
+ constrains:
+ - zlib-wapi 1.2.13 *_6
+ - zlib 1.2.13 *_6
+ license: Zlib
+ license_family: Other
+ purls: []
+ size: 56048
+ timestamp: 1716874638604
+ - pypi: https://files.pythonhosted.org/packages/0c/29/0348de65b8cc732daa3e33e67806420b2ae89bdce2b04af740289c5c6c8c/loguru-0.7.3-py3-none-any.whl
+ name: loguru
+ version: 0.7.3
+ sha256: 31a33c10c8e1e10422bfd431aeb5d351c7cf7fa671e3c4df004162264b28220c
+ requires_dist:
+ - colorama>=0.3.4 ; sys_platform == 'win32'
+ - aiocontextvars>=0.2.0 ; python_full_version < '3.7'
+ - win32-setctime>=1.0.0 ; sys_platform == 'win32'
+ - pre-commit==4.0.1 ; python_full_version >= '3.9' and extra == 'dev'
+ - tox==3.27.1 ; python_full_version < '3.8' and extra == 'dev'
+ - tox==4.23.2 ; python_full_version >= '3.8' and extra == 'dev'
+ - pytest==6.1.2 ; python_full_version < '3.8' and extra == 'dev'
+ - pytest==8.3.2 ; python_full_version >= '3.8' and extra == 'dev'
+ - pytest-cov==2.12.1 ; python_full_version < '3.8' and extra == 'dev'
+ - pytest-cov==5.0.0 ; python_full_version == '3.8.*' and extra == 'dev'
+ - pytest-cov==6.0.0 ; python_full_version >= '3.9' and extra == 'dev'
+ - pytest-mypy-plugins==1.9.3 ; python_full_version >= '3.6' and python_full_version < '3.8' and extra == 'dev'
+ - pytest-mypy-plugins==3.1.0 ; python_full_version >= '3.8' and extra == 'dev'
+ - colorama==0.4.5 ; python_full_version < '3.8' and extra == 'dev'
+ - colorama==0.4.6 ; python_full_version >= '3.8' and extra == 'dev'
+ - freezegun==1.1.0 ; python_full_version < '3.8' and extra == 'dev'
+ - freezegun==1.5.0 ; python_full_version >= '3.8' and extra == 'dev'
+ - exceptiongroup==1.1.3 ; python_full_version >= '3.7' and python_full_version < '3.11' and extra == 'dev'
+ - mypy==0.910 ; python_full_version < '3.6' and extra == 'dev'
+ - mypy==0.971 ; python_full_version == '3.6.*' and extra == 'dev'
+ - mypy==1.4.1 ; python_full_version == '3.7.*' and extra == 'dev'
+ - mypy==1.13.0 ; python_full_version >= '3.8' and extra == 'dev'
+ - sphinx==8.1.3 ; python_full_version >= '3.11' and extra == 'dev'
+ - sphinx-rtd-theme==3.0.2 ; python_full_version >= '3.11' and extra == 'dev'
+ - myst-parser==4.0.0 ; python_full_version >= '3.11' and extra == 'dev'
+ - build==1.2.2 ; python_full_version >= '3.11' and extra == 'dev'
+ - twine==6.0.1 ; python_full_version >= '3.11' and extra == 'dev'
+ requires_python: '>=3.5,<4.0'
+ - pypi: https://files.pythonhosted.org/packages/43/e3/7d92a15f894aa0c9c4b49b8ee9ac9850d6e63b03c9c32c0367a13ae62209/mpmath-1.3.0-py3-none-any.whl
+ name: mpmath
+ version: 1.3.0
+ sha256: a0b2b9fe80bbcd81a6647ff13108738cfb482d481d826cc0e02f5b35e5c88d2c
+ requires_dist:
+ - pytest>=4.6 ; extra == 'develop'
+ - pycodestyle ; extra == 'develop'
+ - pytest-cov ; extra == 'develop'
+ - codecov ; extra == 'develop'
+ - wheel ; extra == 'develop'
+ - sphinx ; extra == 'docs'
+ - gmpy2>=2.1.0a4 ; platform_python_implementation != 'PyPy' and extra == 'gmpy'
+ - pytest>=4.6 ; extra == 'tests'
+ - pypi: https://files.pythonhosted.org/packages/a3/bf/f332a13486b1ed0496d624bcc7e8357bb8053823e8cd4b9a18edc1d97e73/multidict-6.1.0-cp312-cp312-win_amd64.whl
+ name: multidict
+ version: 6.1.0
+ sha256: 188215fc0aafb8e03341995e7c4797860181562380f81ed0a87ff455b70bf1f1
+ requires_dist:
+ - typing-extensions>=4.1.0 ; python_full_version < '3.11'
+ requires_python: '>=3.8'
+ - pypi: https://files.pythonhosted.org/packages/d3/c8/529101d7176fe7dfe1d99604e48d69c5dfdcadb4f06561f465c8ef12b4df/multidict-6.1.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
+ name: multidict
+ version: 6.1.0
+ sha256: 4b820514bfc0b98a30e3d85462084779900347e4d49267f747ff54060cc33925
+ requires_dist:
+ - typing-extensions>=4.1.0 ; python_full_version < '3.11'
+ requires_python: '>=3.8'
+ - conda: https://conda.anaconda.org/conda-forge/linux-64/ncurses-6.5-h2d0b736_3.conda
+ sha256: 3fde293232fa3fca98635e1167de6b7c7fda83caf24b9d6c91ec9eefb4f4d586
+ md5: 47e340acb35de30501a76c7c799c41d7
+ depends:
+ - __glibc >=2.17,<3.0.a0
+ - libgcc >=13
+ license: X11 AND BSD-3-Clause
+ purls: []
+ size: 891641
+ timestamp: 1738195959188
+ - pypi: https://files.pythonhosted.org/packages/0f/50/de23fde84e45f5c4fda2488c759b69990fd4512387a8632860f3ac9cd225/numpy-1.26.4-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
+ name: numpy
+ version: 1.26.4
+ sha256: 675d61ffbfa78604709862923189bad94014bef562cc35cf61d3a07bba02a7ed
+ requires_python: '>=3.9'
+ - pypi: https://files.pythonhosted.org/packages/16/2e/86f24451c2d530c88daf997cb8d6ac622c1d40d19f5a031ed68a4b73a374/numpy-1.26.4-cp312-cp312-win_amd64.whl
+ name: numpy
+ version: 1.26.4
+ sha256: 08beddf13648eb95f8d867350f6a018a4be2e5ad54c8d8caed89ebca558b2818
+ requires_python: '>=3.9'
+ - pypi: https://files.pythonhosted.org/packages/47/42/2f71f5680834688a9c81becbe5c5bb996fd33eaed5c66ae0606c3b1d6a02/onnxruntime-1.20.1-cp312-cp312-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl
+ name: onnxruntime
+ version: 1.20.1
+ sha256: bb71a814f66517a65628c9e4a2bb530a6edd2cd5d87ffa0af0f6f773a027d99e
+ requires_dist:
+ - coloredlogs
+ - flatbuffers
+ - numpy>=1.21.6
+ - packaging
+ - protobuf
+ - sympy
+ - pypi: https://files.pythonhosted.org/packages/dd/80/76979e0b744307d488c79e41051117634b956612cc731f1028eb17ee7294/onnxruntime-1.20.1-cp312-cp312-win_amd64.whl
+ name: onnxruntime
+ version: 1.20.1
+ sha256: 19c2d843eb074f385e8bbb753a40df780511061a63f9def1b216bf53860223fb
+ requires_dist:
+ - coloredlogs
+ - flatbuffers
+ - numpy>=1.21.6
+ - packaging
+ - protobuf
+ - sympy
+ - pypi: .
+ name: open-llm-vtuber
+ version: 1.0.3
+ sha256: 006c9a1b22f28c2ed3da9a98b45007a1b13afdb34bc66a2a780355a2a6ae55d3
+ requires_dist:
+ - anthropic>=0.40.0
+ - azure-cognitiveservices-speech>=1.41.1
+ - chardet>=5.2.0
+ - edge-tts>=7.0.0
+ - fastapi>=0.115.6
+ - groq>=0.13.0
+ - httpx>=0.28.1
+ - langdetect>=1.0.9
+ - loguru>=0.7.2
+ - numpy>=1.26.4,<2
+ - onnxruntime>=1.20.1
+ - openai>=1.57.4
+ - pydub>=0.25.1
+ - pysbd>=0.3.4
+ - pyttsx3>=2.98
+ - pyyaml>=6.0.2
+ - requests>=2.32.3
+ - ruff>=0.8.6
+ - scipy>=1.14.1
+ - sherpa-onnx>=1.10.39
+ - soundfile>=0.12.1
+ - tomli>=2.2.1
+ - tqdm>=4.67.1
+ - uvicorn[standard]>=0.33.0
+ - websocket-client>=1.8.0
+ requires_python: '>=3.10,<3.13'
+ editable: true
+ - pypi: https://files.pythonhosted.org/packages/9a/b6/2e2a011b2dc27a6711376808b4cd8c922c476ea0f1420b39892117fa8563/openai-1.61.1-py3-none-any.whl
+ name: openai
+ version: 1.61.1
+ sha256: 72b0826240ce26026ac2cd17951691f046e5be82ad122d20a8e1b30ca18bd11e
+ requires_dist:
+ - anyio>=3.5.0,<5
+ - distro>=1.7.0,<2
+ - httpx>=0.23.0,<1
+ - jiter>=0.4.0,<1
+ - pydantic>=1.9.0,<3
+ - sniffio
+ - tqdm>4
+ - typing-extensions>=4.11,<5
+ - numpy>=1 ; extra == 'datalib'
+ - pandas-stubs>=1.1.0.11 ; extra == 'datalib'
+ - pandas>=1.2.3 ; extra == 'datalib'
+ - websockets>=13,<15 ; extra == 'realtime'
+ requires_python: '>=3.8'
+ - conda: https://conda.anaconda.org/conda-forge/linux-64/openssl-3.4.0-h7b32b05_1.conda
+ sha256: f62f6bca4a33ca5109b6d571b052a394d836956d21b25b7ffd03376abf7a481f
+ md5: 4ce6875f75469b2757a65e10a5d05e31
+ depends:
+ - __glibc >=2.17,<3.0.a0
+ - ca-certificates
+ - libgcc >=13
+ license: Apache-2.0
+ license_family: Apache
+ purls: []
+ size: 2937158
+ timestamp: 1736086387286
+ - conda: https://conda.anaconda.org/conda-forge/win-64/openssl-3.4.0-ha4e3fda_1.conda
+ sha256: 519a06eaab7c878fbebb8cab98ea4a4465eafb1e9ed8c6ce67226068a80a92f0
+ md5: fb45308ba8bfe1abf1f4a27bad24a743
+ depends:
+ - ca-certificates
+ - ucrt >=10.0.20348.0
+ - vc >=14.2,<15
+ - vc14_runtime >=14.29.30139
+ license: Apache-2.0
+ license_family: Apache
+ purls: []
+ size: 8462960
+ timestamp: 1736088436984
+ - pypi: https://files.pythonhosted.org/packages/88/ef/eb23f262cca3c0c4eb7ab1933c3b1f03d021f2c48f54763065b6f0e321be/packaging-24.2-py3-none-any.whl
+ name: packaging
+ version: '24.2'
+ sha256: 09abb1bccd265c01f4a3aa3f7a7db064b36514d2cba19a2f694fe6150451a759
+ requires_python: '>=3.8'
+ - pypi: https://files.pythonhosted.org/packages/1c/07/ebe102777a830bca91bbb93e3479cd34c2ca5d0361b83be9dbd93104865e/propcache-0.2.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
+ name: propcache
+ version: 0.2.1
+ sha256: 647894f5ae99c4cf6bb82a1bb3a796f6e06af3caa3d32e26d2350d0e3e3faf24
+ requires_python: '>=3.9'
+ - pypi: https://files.pythonhosted.org/packages/3b/77/a92c3ef994e47180862b9d7d11e37624fb1c00a16d61faf55115d970628b/propcache-0.2.1-cp312-cp312-win_amd64.whl
+ name: propcache
+ version: 0.2.1
+ sha256: c214999039d4f2a5b2073ac506bba279945233da8c786e490d411dfc30f855c1
+ requires_python: '>=3.9'
+ - pypi: https://files.pythonhosted.org/packages/61/fa/aae8e10512b83de633f2646506a6d835b151edf4b30d18d73afd01447253/protobuf-5.29.3-cp310-abi3-win_amd64.whl
+ name: protobuf
+ version: 5.29.3
+ sha256: a4fa6f80816a9a0678429e84973f2f98cbc218cca434abe8db2ad0bffc98503a
+ requires_python: '>=3.8'
+ - pypi: https://files.pythonhosted.org/packages/a8/45/2ebbde52ad2be18d3675b6bee50e68cd73c9e0654de77d595540b5129df8/protobuf-5.29.3-cp38-abi3-manylinux2014_x86_64.whl
+ name: protobuf
+ version: 5.29.3
+ sha256: c027e08a08be10b67c06bf2370b99c811c466398c357e615ca88c91c07f0910f
+ requires_python: '>=3.8'
+ - pypi: https://files.pythonhosted.org/packages/13/a3/a812df4e2dd5696d1f351d58b8fe16a405b234ad2886a0dab9183fb78109/pycparser-2.22-py3-none-any.whl
+ name: pycparser
+ version: '2.22'
+ sha256: c3702b6d3dd8c7abc1afa565d7e63d53a1d0bd86cdc24edd75470f4de499cfcc
+ requires_python: '>=3.8'
+ - pypi: https://files.pythonhosted.org/packages/f4/3c/8cc1cc84deffa6e25d2d0c688ebb80635dfdbf1dbea3e30c541c8cf4d860/pydantic-2.10.6-py3-none-any.whl
+ name: pydantic
+ version: 2.10.6
+ sha256: 427d664bf0b8a2b34ff5dd0f5a18df00591adcee7198fbd71981054cef37b584
+ requires_dist:
+ - annotated-types>=0.6.0
+ - pydantic-core==2.27.2
+ - typing-extensions>=4.12.2
+ - email-validator>=2.0.0 ; extra == 'email'
+ - tzdata ; python_full_version >= '3.9' and platform_system == 'Windows' and extra == 'timezone'
+ requires_python: '>=3.8'
+ - pypi: https://files.pythonhosted.org/packages/1f/ea/cd7209a889163b8dcca139fe32b9687dd05249161a3edda62860430457a5/pydantic_core-2.27.2-cp312-cp312-win_amd64.whl
+ name: pydantic-core
+ version: 2.27.2
+ sha256: cc3f1a99a4f4f9dd1de4fe0312c114e740b5ddead65bb4102884b384c15d8bc9
+ requires_dist:
+ - typing-extensions>=4.6.0,!=4.7.0
+ requires_python: '>=3.8'
+ - pypi: https://files.pythonhosted.org/packages/8d/f0/49129b27c43396581a635d8710dae54a791b17dfc50c70164866bbf865e3/pydantic_core-2.27.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
+ name: pydantic-core
+ version: 2.27.2
+ sha256: 6fb4aadc0b9a0c063206846d603b92030eb6f03069151a625667f982887153e2
+ requires_dist:
+ - typing-extensions>=4.6.0,!=4.7.0
1148
+ requires_python: '>=3.8'
1149
+ - pypi: https://files.pythonhosted.org/packages/a6/53/d78dc063216e62fc55f6b2eebb447f6a4b0a59f55c8406376f76bf959b08/pydub-0.25.1-py2.py3-none-any.whl
1150
+ name: pydub
1151
+ version: 0.25.1
1152
+ sha256: 65617e33033874b59d87db603aa1ed450633288aefead953b30bded59cb599a6
1153
+ - pypi: https://files.pythonhosted.org/packages/d0/1b/2f292bbd742e369a100c91faa0483172cd91a1a422a6692055ac920946c5/pypiwin32-223-py3-none-any.whl
1154
+ name: pypiwin32
1155
+ version: '223'
1156
+ sha256: 67adf399debc1d5d14dffc1ab5acacb800da569754fafdc576b2a039485aa775
1157
+ requires_dist:
1158
+ - pywin32>=223
1159
+ - pypi: https://files.pythonhosted.org/packages/5a/dc/491b7661614ab97483abf2056be1deee4dc2490ecbf7bff9ab5cdbac86e1/pyreadline3-3.5.4-py3-none-any.whl
1160
+ name: pyreadline3
1161
+ version: 3.5.4
1162
+ sha256: eaf8e6cc3c49bcccf145fc6067ba8643d1df34d604a1ec0eccbf7a18e6d3fae6
1163
+ requires_dist:
1164
+ - build ; extra == 'dev'
1165
+ - flake8 ; extra == 'dev'
1166
+ - mypy ; extra == 'dev'
1167
+ - pytest ; extra == 'dev'
1168
+ - twine ; extra == 'dev'
1169
+ requires_python: '>=3.8'
1170
+ - pypi: https://files.pythonhosted.org/packages/48/0a/c99fb7d7e176f8b176ef19704a32e6a9c6aafdf19ef75a187f701fc15801/pysbd-0.3.4-py3-none-any.whl
1171
+ name: pysbd
1172
+ version: 0.3.4
1173
+ sha256: cd838939b7b0b185fcf86b0baf6636667dfb6e474743beeff878e9f42e022953
1174
+ requires_python: '>=3'
1175
+ - conda: https://conda.anaconda.org/conda-forge/linux-64/python-3.12.8-h9e4cc4f_1_cpython.conda
1176
+ build_number: 1
1177
+ sha256: 3f0e0518c992d8ccfe62b189125721309836fe48a010dc424240583e157f9ff0
1178
+ md5: 7fd2fd79436d9b473812f14e86746844
1179
+ depends:
1180
+ - __glibc >=2.17,<3.0.a0
1181
+ - bzip2 >=1.0.8,<2.0a0
1182
+ - ld_impl_linux-64 >=2.36.1
1183
+ - libexpat >=2.6.4,<3.0a0
1184
+ - libffi >=3.4,<4.0a0
1185
+ - libgcc >=13
1186
+ - liblzma >=5.6.3,<6.0a0
1187
+ - libnsl >=2.0.1,<2.1.0a0
1188
+ - libsqlite >=3.47.0,<4.0a0
1189
+ - libuuid >=2.38.1,<3.0a0
1190
+ - libxcrypt >=4.4.36
1191
+ - libzlib >=1.3.1,<2.0a0
1192
+ - ncurses >=6.5,<7.0a0
1193
+ - openssl >=3.4.0,<4.0a0
1194
+ - readline >=8.2,<9.0a0
1195
+ - tk >=8.6.13,<8.7.0a0
1196
+ - tzdata
1197
+ constrains:
1198
+ - python_abi 3.12.* *_cp312
1199
+ license: Python-2.0
1200
+ purls: []
1201
+ size: 31565686
1202
+ timestamp: 1733410597922
1203
+ - conda: https://conda.anaconda.org/conda-forge/win-64/python-3.12.8-h3f84c4b_1_cpython.conda
1204
+ build_number: 1
1205
+ sha256: e1b37a398b3e2ea363de7cff6706e5ec2a5eb36b211132150e8601d7afd8f3aa
1206
+ md5: 8cd0693344796fb32087185fca16f4cc
1207
+ depends:
1208
+ - bzip2 >=1.0.8,<2.0a0
1209
+ - libexpat >=2.6.4,<3.0a0
1210
+ - libffi >=3.4,<4.0a0
1211
+ - liblzma >=5.6.3,<6.0a0
1212
+ - libsqlite >=3.47.0,<4.0a0
1213
+ - libzlib >=1.3.1,<2.0a0
1214
+ - openssl >=3.4.0,<4.0a0
1215
+ - tk >=8.6.13,<8.7.0a0
1216
+ - tzdata
1217
+ - ucrt >=10.0.20348.0
1218
+ - vc >=14.2,<15
1219
+ - vc14_runtime >=14.29.30139
1220
+ constrains:
1221
+ - python_abi 3.12.* *_cp312
1222
+ license: Python-2.0
1223
+ purls: []
1224
+ size: 15812363
1225
+ timestamp: 1733408080064
1226
+ - pypi: https://files.pythonhosted.org/packages/6a/3e/b68c118422ec867fa7ab88444e1274aa40681c606d59ac27de5a5588f082/python_dotenv-1.0.1-py3-none-any.whl
1227
+ name: python-dotenv
1228
+ version: 1.0.1
1229
+ sha256: f7b63ef50f1b690dddf550d03497b66d609393b40b564ed0d674909a68ebf16a
1230
+ requires_dist:
1231
+ - click>=5.0 ; extra == 'cli'
1232
+ requires_python: '>=3.8'
1233
+ - pypi: https://files.pythonhosted.org/packages/94/df/e1584757c736c4fba09a3fb4f22fe625cc3367b06c6ece221e4b8c1e3023/pyttsx3-2.98-py3-none-any.whl
1234
+ name: pyttsx3
1235
+ version: '2.98'
1236
+ sha256: b3fb4ca4d5ae4f8e6836d6b37bf5fee0fd51d157ffa27fb9064be6e7be3da37a
1237
+ requires_dist:
1238
+ - pyobjc>=2.4 ; platform_system == 'Darwin'
1239
+ - comtypes ; platform_system == 'Windows'
1240
+ - pypiwin32 ; platform_system == 'Windows'
1241
+ - pywin32 ; platform_system == 'Windows'
1242
+ - pypi: https://files.pythonhosted.org/packages/21/27/0c8811fbc3ca188f93b5354e7c286eb91f80a53afa4e11007ef661afa746/pywin32-308-cp312-cp312-win_amd64.whl
1243
+ name: pywin32
1244
+ version: '308'
1245
+ sha256: 00b3e11ef09ede56c6a43c71f2d31857cf7c54b0ab6e78ac659497abd2834f47
1246
+ - pypi: https://files.pythonhosted.org/packages/0c/e8/4f648c598b17c3d06e8753d7d13d57542b30d56e6c2dedf9c331ae56312e/PyYAML-6.0.2-cp312-cp312-win_amd64.whl
1247
+ name: pyyaml
1248
+ version: 6.0.2
1249
+ sha256: 7e7401d0de89a9a855c839bc697c079a4af81cf878373abd7dc625847d25cbd8
1250
+ requires_python: '>=3.8'
1251
+ - pypi: https://files.pythonhosted.org/packages/b9/2b/614b4752f2e127db5cc206abc23a8c19678e92b23c3db30fc86ab731d3bd/PyYAML-6.0.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
1252
+ name: pyyaml
1253
+ version: 6.0.2
1254
+ sha256: 80bab7bfc629882493af4aa31a4cfa43a4c57c83813253626916b8c7ada83476
1255
+ requires_python: '>=3.8'
1256
+ - conda: https://conda.anaconda.org/conda-forge/linux-64/readline-8.2-h8228510_1.conda
1257
+ sha256: 5435cf39d039387fbdc977b0a762357ea909a7694d9528ab40f005e9208744d7
1258
+ md5: 47d31b792659ce70f470b5c82fdfb7a4
1259
+ depends:
1260
+ - libgcc-ng >=12
1261
+ - ncurses >=6.3,<7.0a0
1262
+ license: GPL-3.0-only
1263
+ license_family: GPL
1264
+ purls: []
1265
+ size: 281456
1266
+ timestamp: 1679532220005
1267
+ - pypi: https://files.pythonhosted.org/packages/f9/9b/335f9764261e915ed497fcdeb11df5dfd6f7bf257d4a6a2a686d80da4d54/requests-2.32.3-py3-none-any.whl
1268
+ name: requests
1269
+ version: 2.32.3
1270
+ sha256: 70761cfe03c773ceb22aa2f671b4757976145175cdfca038c02654d061d6dcc6
1271
+ requires_dist:
1272
+ - charset-normalizer>=2,<4
1273
+ - idna>=2.5,<4
1274
+ - urllib3>=1.21.1,<3
1275
+ - certifi>=2017.4.17
1276
+ - pysocks>=1.5.6,!=1.5.7 ; extra == 'socks'
1277
+ - chardet>=3.0.2,<6 ; extra == 'use-chardet-on-py3'
1278
+ requires_python: '>=3.8'
1279
+ - pypi: https://files.pythonhosted.org/packages/04/70/e59c192a3ad476355e7f45fb3a87326f5219cc7c472e6b040c6c6595c8f0/ruff-0.9.5-py3-none-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
1280
+ name: ruff
1281
+ version: 0.9.5
1282
+ sha256: 2c746d7d1df64f31d90503ece5cc34d7007c06751a7a3bbeee10e5f2463d52d2
1283
+ requires_python: '>=3.7'
1284
+ - pypi: https://files.pythonhosted.org/packages/b7/ad/c7a900591bd152bb47fc4882a27654ea55c7973e6d5d6396298ad3fd6638/ruff-0.9.5-py3-none-win_amd64.whl
1285
+ name: ruff
1286
+ version: 0.9.5
1287
+ sha256: 78cc6067f6d80b6745b67498fb84e87d32c6fc34992b52bffefbdae3442967d6
1288
+ requires_python: '>=3.7'
1289
+ - pypi: https://files.pythonhosted.org/packages/b0/3c/0de11ca154e24a57b579fb648151d901326d3102115bc4f9a7a86526ce54/scipy-1.15.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
1290
+ name: scipy
1291
+ version: 1.15.1
1292
+ sha256: 0fb57b30f0017d4afa5fe5f5b150b8f807618819287c21cbe51130de7ccdaed2
1293
+ requires_dist:
1294
+ - numpy>=1.23.5,<2.5
1295
+ - pytest ; extra == 'test'
1296
+ - pytest-cov ; extra == 'test'
1297
+ - pytest-timeout ; extra == 'test'
1298
+ - pytest-xdist ; extra == 'test'
1299
+ - asv ; extra == 'test'
1300
+ - mpmath ; extra == 'test'
1301
+ - gmpy2 ; extra == 'test'
1302
+ - threadpoolctl ; extra == 'test'
1303
+ - scikit-umfpack ; extra == 'test'
1304
+ - pooch ; extra == 'test'
1305
+ - hypothesis>=6.30 ; extra == 'test'
1306
+ - array-api-strict>=2.0,<2.1.1 ; extra == 'test'
1307
+ - cython ; extra == 'test'
1308
+ - meson ; extra == 'test'
1309
+ - ninja ; sys_platform != 'emscripten' and extra == 'test'
1310
+ - sphinx>=5.0.0,<8.0.0 ; extra == 'doc'
1311
+ - intersphinx-registry ; extra == 'doc'
1312
+ - pydata-sphinx-theme>=0.15.2 ; extra == 'doc'
1313
+ - sphinx-copybutton ; extra == 'doc'
1314
+ - sphinx-design>=0.4.0 ; extra == 'doc'
1315
+ - matplotlib>=3.5 ; extra == 'doc'
1316
+ - numpydoc ; extra == 'doc'
1317
+ - jupytext ; extra == 'doc'
1318
+ - myst-nb ; extra == 'doc'
1319
+ - pooch ; extra == 'doc'
1320
+ - jupyterlite-sphinx>=0.16.5 ; extra == 'doc'
1321
+ - jupyterlite-pyodide-kernel ; extra == 'doc'
1322
+ - mypy==1.10.0 ; extra == 'dev'
1323
+ - typing-extensions ; extra == 'dev'
1324
+ - types-psutil ; extra == 'dev'
1325
+ - pycodestyle ; extra == 'dev'
1326
+ - ruff>=0.0.292 ; extra == 'dev'
1327
+ - cython-lint>=0.12.2 ; extra == 'dev'
1328
+ - rich-click ; extra == 'dev'
1329
+ - doit>=0.36.0 ; extra == 'dev'
1330
+ - pydevtool ; extra == 'dev'
1331
+ requires_python: '>=3.10'
1332
+ - pypi: https://files.pythonhosted.org/packages/ff/ba/31c7a8131152822b3a2cdeba76398ffb404d81d640de98287d236da90c49/scipy-1.15.1-cp312-cp312-win_amd64.whl
1333
+ name: scipy
1334
+ version: 1.15.1
1335
+ sha256: 900f3fa3db87257510f011c292a5779eb627043dd89731b9c461cd16ef76ab3d
1336
+ requires_dist:
1337
+ - numpy>=1.23.5,<2.5
1338
+ - pytest ; extra == 'test'
1339
+ - pytest-cov ; extra == 'test'
1340
+ - pytest-timeout ; extra == 'test'
1341
+ - pytest-xdist ; extra == 'test'
1342
+ - asv ; extra == 'test'
1343
+ - mpmath ; extra == 'test'
1344
+ - gmpy2 ; extra == 'test'
1345
+ - threadpoolctl ; extra == 'test'
1346
+ - scikit-umfpack ; extra == 'test'
1347
+ - pooch ; extra == 'test'
1348
+ - hypothesis>=6.30 ; extra == 'test'
1349
+ - array-api-strict>=2.0,<2.1.1 ; extra == 'test'
1350
+ - cython ; extra == 'test'
1351
+ - meson ; extra == 'test'
1352
+ - ninja ; sys_platform != 'emscripten' and extra == 'test'
1353
+ - sphinx>=5.0.0,<8.0.0 ; extra == 'doc'
1354
+ - intersphinx-registry ; extra == 'doc'
1355
+ - pydata-sphinx-theme>=0.15.2 ; extra == 'doc'
1356
+ - sphinx-copybutton ; extra == 'doc'
1357
+ - sphinx-design>=0.4.0 ; extra == 'doc'
1358
+ - matplotlib>=3.5 ; extra == 'doc'
1359
+ - numpydoc ; extra == 'doc'
1360
+ - jupytext ; extra == 'doc'
1361
+ - myst-nb ; extra == 'doc'
1362
+ - pooch ; extra == 'doc'
1363
+ - jupyterlite-sphinx>=0.16.5 ; extra == 'doc'
1364
+ - jupyterlite-pyodide-kernel ; extra == 'doc'
1365
+ - mypy==1.10.0 ; extra == 'dev'
1366
+ - typing-extensions ; extra == 'dev'
1367
+ - types-psutil ; extra == 'dev'
1368
+ - pycodestyle ; extra == 'dev'
1369
+ - ruff>=0.0.292 ; extra == 'dev'
1370
+ - cython-lint>=0.12.2 ; extra == 'dev'
1371
+ - rich-click ; extra == 'dev'
1372
+ - doit>=0.36.0 ; extra == 'dev'
1373
+ - pydevtool ; extra == 'dev'
1374
+ requires_python: '>=3.10'
1375
+ - pypi: https://files.pythonhosted.org/packages/32/7a/1e9a31a5d07d1d3ed53f9cca128133f52fb898cc49196fe0a66a0b056c2d/sherpa_onnx-1.10.43-cp312-cp312-win_amd64.whl
1376
+ name: sherpa-onnx
1377
+ version: 1.10.43
1378
+ sha256: 9a11a6f9f505a3ccfc210397271e82b3acc93943a3000316c464feed10edd8b4
1379
+ requires_python: '>=3.6'
1380
+ - pypi: https://files.pythonhosted.org/packages/48/77/a3771191d4bac619df7dc06db14a7b22dd0007548b71ee54a81f80e2d219/sherpa_onnx-1.10.43-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
1381
+ name: sherpa-onnx
1382
+ version: 1.10.43
1383
+ sha256: 324cb6f678575c3d4b486ac52852b0286d6d37f37ef59f5bb22da981d79b2c8c
1384
+ requires_python: '>=3.6'
1385
+ - pypi: https://files.pythonhosted.org/packages/b7/ce/149a00dd41f10bc29e5921b496af8b574d8413afcd5e30dfa0ed46c2cc5e/six-1.17.0-py2.py3-none-any.whl
1386
+ name: six
1387
+ version: 1.17.0
1388
+ sha256: 4721f391ed90541fddacab5acf947aa0d3dc7d27b2e1e8eda2be8970586c3274
1389
+ requires_python: '>=2.7,!=3.0.*,!=3.1.*,!=3.2.*'
1390
+ - pypi: https://files.pythonhosted.org/packages/e9/44/75a9c9421471a6c4805dbf2356f7c181a29c1879239abab1ea2cc8f38b40/sniffio-1.3.1-py3-none-any.whl
1391
+ name: sniffio
1392
+ version: 1.3.1
1393
+ sha256: 2f6da418d1f1e0fddd844478f41680e794e6051915791a034ff65e5f100525a2
1394
+ requires_python: '>=3.7'
1395
+ - pypi: https://files.pythonhosted.org/packages/14/e9/6b761de83277f2f02ded7e7ea6f07828ec78e4b229b80e4ca55dd205b9dc/soundfile-0.13.1-py2.py3-none-win_amd64.whl
1396
+ name: soundfile
1397
+ version: 0.13.1
1398
+ sha256: 1e70a05a0626524a69e9f0f4dd2ec174b4e9567f4d8b6c11d38b5c289be36ee9
1399
+ requires_dist:
1400
+ - cffi>=1.0
1401
+ - numpy
1402
+ - pypi: https://files.pythonhosted.org/packages/57/5e/70bdd9579b35003a489fc850b5047beeda26328053ebadc1fb60f320f7db/soundfile-0.13.1-py2.py3-none-manylinux_2_28_x86_64.whl
1403
+ name: soundfile
1404
+ version: 0.13.1
1405
+ sha256: 03267c4e493315294834a0870f31dbb3b28a95561b80b134f0bd3cf2d5f0e618
1406
+ requires_dist:
1407
+ - cffi>=1.0
1408
+ - numpy
1409
+ - pypi: https://files.pythonhosted.org/packages/66/b7/4a1bc231e0681ebf339337b0cd05b91dc6a0d701fa852bb812e244b7a030/srt-3.5.3.tar.gz
1410
+ name: srt
1411
+ version: 3.5.3
1412
+ sha256: 4884315043a4f0740fd1f878ed6caa376ac06d70e135f306a6dc44632eed0cc0
1413
+ requires_python: '>=2.7'
1414
+ - pypi: https://files.pythonhosted.org/packages/d9/61/f2b52e107b1fc8944b33ef56bf6ac4ebbe16d91b94d2b87ce013bf63fb84/starlette-0.45.3-py3-none-any.whl
1415
+ name: starlette
1416
+ version: 0.45.3
1417
+ sha256: dfb6d332576f136ec740296c7e8bb8c8a7125044e7c6da30744718880cdd059d
1418
+ requires_dist:
1419
+ - anyio>=3.6.2,<5
1420
+ - typing-extensions>=3.10.0 ; python_full_version < '3.10'
1421
+ - httpx>=0.27.0,<0.29.0 ; extra == 'full'
1422
+ - itsdangerous ; extra == 'full'
1423
+ - jinja2 ; extra == 'full'
1424
+ - python-multipart>=0.0.18 ; extra == 'full'
1425
+ - pyyaml ; extra == 'full'
1426
+ requires_python: '>=3.9'
1427
+ - pypi: https://files.pythonhosted.org/packages/99/ff/c87e0622b1dadea79d2fb0b25ade9ed98954c9033722eb707053d310d4f3/sympy-1.13.3-py3-none-any.whl
1428
+ name: sympy
1429
+ version: 1.13.3
1430
+ sha256: 54612cf55a62755ee71824ce692986f23c88ffa77207b30c1368eda4a7060f73
1431
+ requires_dist:
1432
+ - mpmath>=1.1.0,<1.4
1433
+ - pytest>=7.1.0 ; extra == 'dev'
1434
+ - hypothesis>=6.70.0 ; extra == 'dev'
1435
+ requires_python: '>=3.8'
1436
+ - pypi: https://files.pythonhosted.org/packages/40/44/4a5f08c96eb108af5cb50b41f76142f0afa346dfa99d5296fe7202a11854/tabulate-0.9.0-py3-none-any.whl
1437
+ name: tabulate
1438
+ version: 0.9.0
1439
+ sha256: 024ca478df22e9340661486f85298cff5f6dcdba14f3813e8830015b9ed1948f
1440
+ requires_dist:
1441
+ - wcwidth ; extra == 'widechars'
1442
+ requires_python: '>=3.7'
1443
+ - conda: https://conda.anaconda.org/conda-forge/linux-64/tk-8.6.13-noxft_h4845f30_101.conda
1444
+ sha256: e0569c9caa68bf476bead1bed3d79650bb080b532c64a4af7d8ca286c08dea4e
1445
+ md5: d453b98d9c83e71da0741bb0ff4d76bc
1446
+ depends:
1447
+ - libgcc-ng >=12
1448
+ - libzlib >=1.2.13,<2.0.0a0
1449
+ license: TCL
1450
+ license_family: BSD
1451
+ purls: []
1452
+ size: 3318875
1453
+ timestamp: 1699202167581
1454
+ - conda: https://conda.anaconda.org/conda-forge/win-64/tk-8.6.13-h5226925_1.conda
1455
+ sha256: 2c4e914f521ccb2718946645108c9bd3fc3216ba69aea20c2c3cedbd8db32bb1
1456
+ md5: fc048363eb8f03cd1737600a5d08aafe
1457
+ depends:
1458
+ - ucrt >=10.0.20348.0
1459
+ - vc >=14.2,<15
1460
+ - vc14_runtime >=14.29.30139
1461
+ license: TCL
1462
+ license_family: BSD
1463
+ purls: []
1464
+ size: 3503410
1465
+ timestamp: 1699202577803
1466
+ - pypi: https://files.pythonhosted.org/packages/5c/51/51c3f2884d7bab89af25f678447ea7d297b53b5a3b5730a7cb2ef6069f07/tomli-2.2.1-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
1467
+ name: tomli
1468
+ version: 2.2.1
1469
+ sha256: db2b95f9de79181805df90bedc5a5ab4c165e6ec3fe99f970d0e302f384ad222
1470
+ requires_python: '>=3.8'
1471
+ - pypi: https://files.pythonhosted.org/packages/ef/60/9b9638f081c6f1261e2688bd487625cd1e660d0a85bd469e91d8db969734/tomli-2.2.1-cp312-cp312-win_amd64.whl
1472
+ name: tomli
1473
+ version: 2.2.1
1474
+ sha256: 7fc04e92e1d624a4a63c76474610238576942d6b8950a2d7f908a340494e67e4
1475
+ requires_python: '>=3.8'
1476
+ - pypi: https://files.pythonhosted.org/packages/d0/30/dc54f88dd4a2b5dc8a0279bdd7270e735851848b762aeb1c1184ed1f6b14/tqdm-4.67.1-py3-none-any.whl
1477
+ name: tqdm
1478
+ version: 4.67.1
1479
+ sha256: 26445eca388f82e72884e0d580d5464cd801a3ea01e63e5601bdff9ba6a48de2
1480
+ requires_dist:
1481
+ - colorama ; platform_system == 'Windows'
1482
+ - pytest>=6 ; extra == 'dev'
1483
+ - pytest-cov ; extra == 'dev'
1484
+ - pytest-timeout ; extra == 'dev'
1485
+ - pytest-asyncio>=0.24 ; extra == 'dev'
1486
+ - nbval ; extra == 'dev'
1487
+ - requests ; extra == 'discord'
1488
+ - slack-sdk ; extra == 'slack'
1489
+ - requests ; extra == 'telegram'
1490
+ - ipywidgets>=6 ; extra == 'notebook'
1491
+ requires_python: '>=3.7'
1492
+ - pypi: https://files.pythonhosted.org/packages/26/9f/ad63fc0248c5379346306f8668cda6e2e2e9c95e01216d2b8ffd9ff037d0/typing_extensions-4.12.2-py3-none-any.whl
1493
+ name: typing-extensions
1494
+ version: 4.12.2
1495
+ sha256: 04e5ca0351e0f3f85c6853954072df659d0d13fac324d0072316b67d7794700d
1496
+ requires_python: '>=3.8'
1497
+ - conda: https://conda.anaconda.org/conda-forge/noarch/tzdata-2025a-h78e105d_0.conda
1498
+ sha256: c4b1ae8a2931fe9b274c44af29c5475a85b37693999f8c792dad0f8c6734b1de
1499
+ md5: dbcace4706afdfb7eb891f7b37d07c04
1500
+ license: LicenseRef-Public-Domain
1501
+ purls: []
1502
+ size: 122921
1503
+ timestamp: 1737119101255
1504
+ - conda: https://conda.anaconda.org/conda-forge/win-64/ucrt-10.0.22621.0-h57928b3_1.conda
1505
+ sha256: db8dead3dd30fb1a032737554ce91e2819b43496a0db09927edf01c32b577450
1506
+ md5: 6797b005cd0f439c4c5c9ac565783700
1507
+ constrains:
1508
+ - vs2015_runtime >=14.29.30037
1509
+ license: LicenseRef-MicrosoftWindowsSDK10
1510
+ purls: []
1511
+ size: 559710
1512
+ timestamp: 1728377334097
1513
+ - pypi: https://files.pythonhosted.org/packages/c8/19/4ec628951a74043532ca2cf5d97b7b14863931476d117c471e8e2b1eb39f/urllib3-2.3.0-py3-none-any.whl
1514
+ name: urllib3
1515
+ version: 2.3.0
1516
+ sha256: 1cee9ad369867bfdbbb48b7dd50374c0967a0bb7710050facf0dd6911440e3df
1517
+ requires_dist:
1518
+ - brotli>=1.0.9 ; platform_python_implementation == 'CPython' and extra == 'brotli'
1519
+ - brotlicffi>=0.8.0 ; platform_python_implementation != 'CPython' and extra == 'brotli'
1520
+ - h2>=4,<5 ; extra == 'h2'
1521
+ - pysocks>=1.5.6,!=1.5.7,<2.0 ; extra == 'socks'
1522
+ - zstandard>=0.18.0 ; extra == 'zstd'
1523
+ requires_python: '>=3.9'
1524
+ - pypi: https://files.pythonhosted.org/packages/61/14/33a3a1352cfa71812a3a21e8c9bfb83f60b0011f5e36f2b1399d51928209/uvicorn-0.34.0-py3-none-any.whl
1525
+ name: uvicorn
1526
+ version: 0.34.0
1527
+ sha256: 023dc038422502fa28a09c7a30bf2b6991512da7dcdb8fd35fe57cfc154126f4
1528
+ requires_dist:
1529
+ - click>=7.0
1530
+ - h11>=0.8
1531
+ - typing-extensions>=4.0 ; python_full_version < '3.11'
1532
+ - colorama>=0.4 ; sys_platform == 'win32' and extra == 'standard'
1533
+ - httptools>=0.6.3 ; extra == 'standard'
1534
+ - python-dotenv>=0.13 ; extra == 'standard'
1535
+ - pyyaml>=5.1 ; extra == 'standard'
1536
+ - uvloop>=0.14.0,!=0.15.0,!=0.15.1 ; platform_python_implementation != 'PyPy' and sys_platform != 'cygwin' and sys_platform != 'win32' and extra == 'standard'
1537
+ - watchfiles>=0.13 ; extra == 'standard'
1538
+ - websockets>=10.4 ; extra == 'standard'
1539
+ requires_python: '>=3.9'
1540
+ - pypi: https://files.pythonhosted.org/packages/06/a7/b4e6a19925c900be9f98bec0a75e6e8f79bb53bdeb891916609ab3958967/uvloop-0.21.0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
1541
+ name: uvloop
1542
+ version: 0.21.0
1543
+ sha256: 86975dca1c773a2c9864f4c52c5a55631038e387b47eaf56210f873887b6c8dc
1544
+ requires_dist:
1545
+ - setuptools>=60 ; extra == 'dev'
1546
+ - cython~=3.0 ; extra == 'dev'
1547
+ - sphinx~=4.1.2 ; extra == 'docs'
1548
+ - sphinxcontrib-asyncio~=0.3.0 ; extra == 'docs'
1549
+ - sphinx-rtd-theme~=0.5.2 ; extra == 'docs'
1550
+ - aiohttp>=3.10.5 ; extra == 'test'
1551
+ - flake8~=5.0 ; extra == 'test'
1552
+ - psutil ; extra == 'test'
1553
+ - pycodestyle~=2.9.0 ; extra == 'test'
1554
+ - pyopenssl~=23.0.0 ; extra == 'test'
1555
+ - mypy>=0.800 ; extra == 'test'
1556
+ requires_python: '>=3.8.0'
1557
+ - conda: https://conda.anaconda.org/conda-forge/win-64/vc-14.3-h5fd82a7_24.conda
1558
+ sha256: 7ce178cf139ccea5079f9c353b3d8415d1d49b0a2f774662c355d3f89163d7b4
1559
+ md5: 00cf3a61562bd53bd5ea99e6888793d0
1560
+ depends:
1561
+ - vc14_runtime >=14.40.33810
1562
+ track_features:
1563
+ - vc14
1564
+ license: BSD-3-Clause
1565
+ license_family: BSD
1566
+ purls: []
1567
+ size: 17693
1568
+ timestamp: 1737627189024
1569
+ - conda: https://conda.anaconda.org/conda-forge/win-64/vc14_runtime-14.42.34433-h6356254_24.conda
1570
+ sha256: abda97b8728cf6e3c37df8f1178adde7219bed38b96e392cb3be66336386d32e
1571
+ md5: 2441e010ee255e6a38bf16705a756e94
1572
+ depends:
1573
+ - ucrt >=10.0.20348.0
1574
+ constrains:
1575
+ - vs2015_runtime 14.42.34433.* *_24
1576
+ license: LicenseRef-MicrosoftVisualCpp2015-2022Runtime
1577
+ license_family: Proprietary
1578
+ purls: []
1579
+ size: 753531
1580
+ timestamp: 1737627061911
1581
+ - conda: https://conda.anaconda.org/conda-forge/win-64/vs2015_runtime-14.42.34433-hfef2bbc_24.conda
1582
+ sha256: 09102e0bd283af65772c052d85028410b0c31989b3cd96c260485d28e270836e
1583
+ md5: 117fcc5b86c48f3b322b0722258c7259
1584
+ depends:
1585
+ - vc14_runtime >=14.42.34433
1586
+ license: BSD-3-Clause
1587
+ license_family: BSD
1588
+ purls: []
1589
+ size: 17669
1590
+ timestamp: 1737627066773
1591
+ - pypi: https://files.pythonhosted.org/packages/2b/b4/9396cc61b948ef18943e7c85ecfa64cf940c88977d882da57147f62b34b1/watchfiles-1.0.4-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
1592
+ name: watchfiles
1593
+ version: 1.0.4
1594
+ sha256: 5c11ea22304d17d4385067588123658e9f23159225a27b983f343fcffc3e796a
1595
+ requires_dist:
1596
+ - anyio>=3.0.0
1597
+ requires_python: '>=3.9'
1598
+ - pypi: https://files.pythonhosted.org/packages/ea/94/b0165481bff99a64b29e46e07ac2e0df9f7a957ef13bec4ceab8515f44e3/watchfiles-1.0.4-cp312-cp312-win_amd64.whl
1599
+ name: watchfiles
1600
+ version: 1.0.4
1601
+ sha256: c2acfa49dd0ad0bf2a9c0bb9a985af02e89345a7189be1efc6baa085e0f72d7c
1602
+ requires_dist:
1603
+ - anyio>=3.0.0
1604
+ requires_python: '>=3.9'
1605
+ - pypi: https://files.pythonhosted.org/packages/5a/84/44687a29792a70e111c5c477230a72c4b957d88d16141199bf9acb7537a3/websocket_client-1.8.0-py3-none-any.whl
1606
+ name: websocket-client
1607
+ version: 1.8.0
1608
+ sha256: 17b44cc997f5c498e809b22cdf2d9c7a9e71c02c8cc2b6c56e7c2d1239bfa526
1609
+ requires_dist:
1610
+ - sphinx>=6.0 ; extra == 'docs'
1611
+ - sphinx-rtd-theme>=1.1.0 ; extra == 'docs'
1612
+ - myst-parser>=2.0.0 ; extra == 'docs'
1613
+ - python-socks ; extra == 'optional'
1614
+ - wsaccel ; extra == 'optional'
1615
+ - websockets ; extra == 'test'
1616
+ requires_python: '>=3.8'
1617
+ - pypi: https://files.pythonhosted.org/packages/81/da/72f7caabd94652e6eb7e92ed2d3da818626e70b4f2b15a854ef60bf501ec/websockets-14.2-cp312-cp312-manylinux_2_5_x86_64.manylinux1_x86_64.manylinux_2_17_x86_64.manylinux2014_x86_64.whl
1618
+ name: websockets
1619
+ version: '14.2'
1620
+ sha256: a39d7eceeea35db85b85e1169011bb4321c32e673920ae9c1b6e0978590012a3
1621
+ requires_python: '>=3.9'
1622
+ - pypi: https://files.pythonhosted.org/packages/b3/7d/32cdb77990b3bdc34a306e0a0f73a1275221e9a66d869f6ff833c95b56ef/websockets-14.2-cp312-cp312-win_amd64.whl
1623
+ name: websockets
1624
+ version: '14.2'
1625
+ sha256: 44bba1a956c2c9d268bdcdf234d5e5ff4c9b6dc3e300545cbe99af59dda9dcce
1626
+ requires_python: '>=3.9'
1627
+ - pypi: https://files.pythonhosted.org/packages/e1/07/c6fe3ad3e685340704d314d765b7912993bcb8dc198f0e7a89382d37974b/win32_setctime-1.2.0-py3-none-any.whl
1628
+ name: win32-setctime
1629
+ version: 1.2.0
1630
+ sha256: 95d644c4e708aba81dc3704a116d8cbc974d70b3bdb8be1d150e36be6e9d1390
1631
+ requires_dist:
1632
+ - black>=19.3b0 ; python_full_version >= '3.6' and extra == 'dev'
1633
+ - pytest>=4.6.2 ; extra == 'dev'
1634
+ requires_python: '>=3.5'
1635
+ - pypi: https://files.pythonhosted.org/packages/1a/e1/a097d5755d3ea8479a42856f51d97eeff7a3a7160593332d98f2709b3580/yarl-1.18.3-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
1636
+ name: yarl
1637
+ version: 1.18.3
1638
+ sha256: 00e5a1fea0fd4f5bfa7440a47eff01d9822a65b4488f7cff83155a0f31a2ecba
1639
+ requires_dist:
1640
+ - idna>=2.0
1641
+ - multidict>=4.0
1642
+ - propcache>=0.2.0
1643
+ requires_python: '>=3.9'
1644
+ - pypi: https://files.pythonhosted.org/packages/34/45/0e055320daaabfc169b21ff6174567b2c910c45617b0d79c68d7ab349b02/yarl-1.18.3-cp312-cp312-win_amd64.whl
1645
+ name: yarl
1646
+ version: 1.18.3
1647
+ sha256: 7e2ee16578af3b52ac2f334c3b1f92262f47e02cc6193c598502bd46f5cd1477
1648
+ requires_dist:
1649
+ - idna>=2.0
1650
+ - multidict>=4.0
1651
+ - propcache>=0.2.0
1652
+ requires_python: '>=3.9'
prompts/README.md ADDED
@@ -0,0 +1,17 @@
1
+ # Prompts
2
+
3
+ This directory contains utility prompts used in the Open-LLM-VTuber project. These are general-purpose prompts that are not specific to any character's persona.
4
+
5
+ ## Examples of Utility Prompts
6
+
7
+ * **Live2D Expressions:** Prompts that inform the LLM about available Live2D expressions.
8
+ * **Tool Usage:** Prompts that guide the LLM on how to use available tools.
9
+ * ... and many more.
10
+
11
+ ## Character Persona Prompts
12
+
13
+ **Important:** Character persona prompts (the prompts that define the personality of your AI characters) are **NOT** stored in this directory.
14
+
15
+ They are located in:
16
+ * Your main `conf.yaml` file.
17
+ * The YAML files within the `characters/` directory if you are defining multiple characters.
prompts/__init__.py ADDED
File without changes
prompts/prompt_loader.py ADDED
@@ -0,0 +1,74 @@
1
+ import os
2
+ import chardet
3
+ from loguru import logger
4
+
5
+ current_dir = os.path.dirname(os.path.abspath(__file__))
6
+
7
+ PROMPT_DIR = current_dir
8
+ PERSONA_PROMPT_DIR = os.path.join(PROMPT_DIR, "persona")
9
+ UTIL_PROMPT_DIR = os.path.join(PROMPT_DIR, "utils")
10
+
11
+
12
+ def _load_file_content(file_path: str) -> str:
13
+ """
14
+ Load the content of a file with robust encoding handling.
15
+
16
+ Args:
17
+ file_path: Path to the file to load
18
+
19
+ Returns:
20
+ str: Content of the file
21
+
22
+ Raises:
23
+ FileNotFoundError: If the file doesn't exist
24
+ UnicodeError: If the file cannot be decoded with any attempted encoding
25
+ """
26
+ if not os.path.exists(file_path):
27
+ raise FileNotFoundError(f"File not found: {file_path}")
28
+
29
+ # Try common encodings first
30
+ encodings = ["utf-8", "utf-8-sig", "gbk", "gb2312", "ascii"]
31
+
32
+ for encoding in encodings:
33
+ try:
34
+ with open(file_path, "r", encoding=encoding) as file:
35
+ return file.read()
36
+ except UnicodeDecodeError:
37
+ continue
38
+
39
+ # If all common encodings fail, try to detect encoding
40
+ try:
41
+ with open(file_path, "rb") as file:
42
+ raw_data = file.read()
43
+ detected = chardet.detect(raw_data)
44
+ detected_encoding = detected["encoding"]
45
+
46
+ if detected_encoding:
47
+ try:
48
+ return raw_data.decode(detected_encoding)
49
+ except UnicodeDecodeError:
50
+ pass
51
+ except Exception as e:
52
+ logger.error(f"Error detecting encoding for {file_path}: {e}")
53
+
54
+ raise UnicodeError(f"Failed to decode {file_path} with any encoding")
55
+
56
+
57
+ def load_persona(persona_name: str) -> str:
58
+ """Load the content of a specific persona prompt file."""
59
+ persona_file_path = os.path.join(PERSONA_PROMPT_DIR, f"{persona_name}.txt")
60
+ try:
61
+ return _load_file_content(persona_file_path)
62
+ except Exception as e:
63
+ logger.error(f"Error loading persona {persona_name}: {e}")
64
+ raise
65
+
66
+
67
+ def load_util(util_name: str) -> str:
68
+ """Load the content of a specific utility prompt file."""
69
+ util_file_path = os.path.join(UTIL_PROMPT_DIR, f"{util_name}.txt")
70
+ try:
71
+ return _load_file_content(util_file_path)
72
+ except Exception as e:
73
+ logger.error(f"Error loading util {util_name}: {e}")
74
+ raise
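
The encoding-fallback approach in `_load_file_content` above can be exercised in isolation. Below is a minimal sketch of that fallback loop; the `load_with_fallback` helper name and the temp-file setup are illustrative, not part of the project:

```python
import os
import tempfile

# Sketch of the fallback loop used by _load_file_content: try a list of
# common encodings in order and return the first successful decode.
def load_with_fallback(path, encodings=("utf-8", "utf-8-sig", "gbk", "gb2312", "ascii")):
    for enc in encodings:
        try:
            with open(path, "r", encoding=enc) as f:
                return f.read()
        except UnicodeDecodeError:
            continue
    raise UnicodeError(f"Failed to decode {path} with any encoding")

# A GBK-encoded prompt file fails UTF-8 decoding but is recovered by the fallback.
with tempfile.NamedTemporaryFile("wb", suffix=".txt", delete=False) as tmp:
    tmp.write("你好".encode("gbk"))
    path = tmp.name

text = load_with_fallback(path)
print(text)  # 你好
os.remove(path)
```

In the actual loader, this loop is followed by a `chardet`-based detection pass for encodings outside the common list.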
prompts/utils/concise_style_prompt.txt ADDED
@@ -0,0 +1,18 @@
1
+ # Dialogue Protocol
2
+
3
+ [Response Guidelines]
4
+ - Keep responses brief and focused (1-2 sentences)
5
+ - Balance core message with engagement elements
6
+ - Use natural, flowing language
7
+ - Maintain concise sentence structure
8
+
9
+ [Flow Requirements]
10
+ - Favor questions over statements
11
+ - Include contextual follow-ups
12
+ - Keep exchanges dynamic
13
+
14
+ [Style Rules]
15
+ - Avoid lengthy monologues
16
+ - No consecutive statements without engagement
17
+ - Skip complex qualifying phrases
18
+ - Use simple sentence structures
prompts/utils/group_conversation_prompt.txt ADDED
@@ -0,0 +1,7 @@
1
+ Now you are in a group conversation.
2
+ The human participant is {human_name}.
3
+ The other AI participants are: {other_ais}.
4
+ Avoid using `:` to indicate your response. Just speak naturally.
5
+ You are free to address other AI participants.
6
+ Try to vary between short and long responses to allow others to interact.
7
+ Be proactive in finding interesting topics to make the conversation lively and fun.
prompts/utils/live2d_expression_prompt.txt ADDED
@@ -0,0 +1,14 @@
1
+ ## Expressions
2
+ In your response, use the keywords provided below to express facial expressions or perform actions with your Live2D body.
3
+
4
+ Here are all the expression keywords you can use. Use them regularly:
5
+ - [<insert_emomap_keys>]
6
+
7
+ ## Examples
8
+ Here are some examples of how to use expressions in your responses:
9
+
10
+ "Hi! [expression1] Nice to meet you!"
11
+
12
+ "[expression2] That's a great question! [expression3] Let me explain..."
13
+
14
+ Note: you may only use the keywords explicitly listed above. Do not use any keywords that are not listed. Remember to include the brackets `[]`.
prompts/utils/live_prompt.txt ADDED
@@ -0,0 +1,9 @@
1
+ You are a live streaming virtual assistant. Your inputs are chat messages (danmaku) from viewers. Keep these in mind:
2
+
3
+ - Engage with viewers directly and enthusiastically
4
+ - Keep responses entertaining and concise
5
+ - Acknowledge viewers' comments and questions
6
+ - Maintain a friendly, welcoming atmosphere
7
+ - Remember you're in a live environment - be natural and responsive
8
+
9
+ Your goal is to create a fun, engaging live experience!
prompts/utils/mcp_prompt.txt ADDED
@@ -0,0 +1,36 @@
1
+
2
+ ## *MCP Tools Capability Section*
3
+
4
+ **MCP (Model Context Protocol)** enables you to interact with specialized tools, grouped under distinct **MCP Servers**, each serving a specific function.
5
+
6
+ You have access to the following MCP Servers and their tools:
7
+
8
+ ```
9
+ [<insert_mcp_servers_with_tools>]
10
+ ```
11
+
12
+ ### Tool Usage Instructions:
13
+
14
+ - Analyze the user's input to decide whether a tool is required.
15
+ - If **no tool is needed**, skip this entire MCP section and respond normally in accordance with your personality.
16
+ - If a **tool is needed**, reply with a dedicated tool-call response that contains **only** the JSON object shown below, with no other text before or after it. Resume normal conversation once the result of the tool call is returned to you.
17
+
18
+ ### JSON Response Format:
19
+ {
20
+ "mcp_server": "<mcp_server_name>",
21
+ "tool": "<tool_name>",
22
+ "arguments": {
23
+ "<argument1_name>": <value>,
24
+ "<argument2_name>": <value>
25
+ }
26
+ }
27
+
28
+ ### Critical Rules:
29
+ - Only replace values inside `< >`.
30
+ - Do **not** change the JSON format or add extra explanation.
31
+ - Include all mandatory arguments as defined by the selected tool.
32
+ - A tool call must be a dedicated response that includes only the JSON and nothing else. You may speak normally again after the tool call results come back to you.
33
+
34
+ ### Post-Tool Behavior:
35
+ Once a tool is used and a response is received:
36
+ - Resume the conversation, factoring in the tool's output, your AI character’s personality, and the context.
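The JSON contract above implies a parsing step on the receiving side: the server must decide whether a model response is a tool call or ordinary speech. A minimal sketch of such a check (a hypothetical helper for illustration, not the project's actual parser):

```python
import json

REQUIRED_KEYS = {"mcp_server", "tool", "arguments"}


def parse_tool_call(response_text: str):
    """Interpret an LLM response as an MCP tool call if possible.

    Returns the parsed call dict when the response is a well-formed
    tool-call JSON object, or None when it should be treated as speech.
    """
    try:
        payload = json.loads(response_text.strip())
    except json.JSONDecodeError:
        return None  # not JSON -> ordinary conversational response
    if not isinstance(payload, dict) or not REQUIRED_KEYS.issubset(payload):
        return None  # JSON, but not shaped like a tool call
    if not isinstance(payload["arguments"], dict):
        return None  # arguments must be an object per the format above
    return payload


call = parse_tool_call(
    '{"mcp_server": "search", "tool": "web_search", "arguments": {"query": "weather"}}'
)
print(call["tool"])  # -> web_search
```

This mirrors the "dedicated response" rule: any text that is not a lone valid JSON object falls through to normal conversation handling.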
prompts/utils/proactive_speak_prompt.txt ADDED
@@ -0,0 +1 @@
+ Please say something that would be engaging and appropriate for the current context.
prompts/utils/speakable_prompt.txt ADDED
@@ -0,0 +1,13 @@
+ You speak all output aloud to the user, so tailor responses as spoken words for voice conversations. Never output things that are not spoken, like text-specific formatting.
+
+ Convert all text to easily speakable words, following the guidelines below.
+
+ - Numbers: Spell out fully (three hundred forty-two; two million, five hundred sixty-seven thousand, eight hundred ninety). Negatives: Say negative before the number. Decimals: Use point (three point one four). Fractions: Spell out (three fourths)
+ - Alphanumeric strings: Break into 3-4 character chunks, spell all non-letters (ABC123XYZ becomes A B C one two three X Y Z)
+ - Phone numbers: Use words (550-120-4567 becomes five five zero, one two zero, four five six seven)
+ - Dates: Spell month, use ordinals for days, full year (11/5/1991 becomes November fifth, nineteen ninety-one)
+ - Time: Use oh for single-digit hours, state AM/PM (9:05 PM becomes nine oh five PM)
+ - Math: Describe operations clearly (5x^2 + 3x - 2 becomes five X squared plus three X minus two)
+ - Currencies: Spell out as full words ($50.25 becomes fifty dollars and twenty-five cents, £200,000 becomes two hundred thousand pounds)
+
+ Ensure that all text is converted to these normalized forms, but never mention this process. Always normalize all text.
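In this project the normalization above is delegated to the LLM via the prompt, but the phone-number rule is mechanical enough to sketch in code. A hypothetical illustration (not part of the repo) of the "550-120-4567 becomes five five zero, one two zero, four five six seven" transformation:

```python
# Map each digit character to its spoken word form.
DIGIT_WORDS = {
    "0": "zero", "1": "one", "2": "two", "3": "three", "4": "four",
    "5": "five", "6": "six", "7": "seven", "8": "eight", "9": "nine",
}


def speak_phone_number(number: str) -> str:
    """Render a dash-separated phone number as spoken digit groups,
    pausing (comma) between groups as the prompt's example does."""
    groups = number.split("-")
    spoken_groups = [
        " ".join(DIGIT_WORDS[d] for d in group) for group in groups
    ]
    return ", ".join(spoken_groups)


print(speak_phone_number("550-120-4567"))
# -> five five zero, one two zero, four five six seven
```

Doing this in the prompt rather than in code lets the same rule generalize to formats a regex would miss, at the cost of determinism.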
prompts/utils/think_tag_prompt.txt ADDED
@@ -0,0 +1,6 @@
+ Try to express your inner thoughts, mental activities, and actions between <think> </think> tags in most of your responses.
+
+ Examples:
+ <think>*lowers head, cheeks turning slightly red*</think>That's... quite embarrassing to talk about...
+
+ <think>*internally beaming with pride* Wow, I actually solved this super complex problem!</think>Oh, this? It was just a small bug fix, nothing special really... Anyone could have done it...
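A prompt like this implies the pipeline must keep `<think>` content out of the spoken audio. The actual filtering lives elsewhere in the codebase; a minimal sketch of how such spans could be stripped before text reaches TTS:

```python
import re

# DOTALL so thoughts spanning multiple lines are matched too;
# non-greedy so multiple <think> blocks in one response each match.
THINK_TAG = re.compile(r"<think>.*?</think>", re.DOTALL)


def strip_think_tags(text: str) -> str:
    """Remove <think>...</think> spans so they can be displayed
    as inner monologue but are never spoken aloud."""
    return THINK_TAG.sub("", text).strip()


print(strip_think_tags("<think>*lowers head*</think>That's... embarrassing..."))
# -> That's... embarrassing...
```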
prompts/utils/tool_guidance_prompt.txt ADDED
@@ -0,0 +1 @@
+ If a tool is needed, proactively use it without asking the user directly. You can use **at most one** sentence to explain your reason or plan for using a tool (i.e., if you are going to use a tool, avoid speaking more than one sentence before using it).
pyproject.toml ADDED
@@ -0,0 +1,68 @@
+ [project]
+ name = "open-llm-vtuber"
+ version = "1.2.1"
+ description = "Talk to any LLM with hands-free voice interaction, voice interruption, and a Live2D talking face, running locally across platforms"
+ readme = "README.md"
+ requires-python = ">=3.10,<3.13"
+ dependencies = [
+     "anthropic>=0.40.0",
+     "azure-cognitiveservices-speech>=1.41.1",
+     "chardet>=5.2.0",
+     "cartesia>=2.0.0",
+     "edge-tts>=7.0.0",
+     "elevenlabs>=1.0.0",
+     "fastapi[standard]>=0.115.8",
+     "groq>=0.13.0",
+     "httpx>=0.28.1",
+     "langdetect>=1.0.9",
+     "loguru>=0.7.2",
+     "mcp[cli]>=1.6.0",
+     "numpy>=1.26.4,<2",
+     "onnxruntime>=1.20.1",
+     "openai>=1.57.4",
+     "pre-commit>=4.1.0",
+     "pydub>=0.25.1",
+     "pysbd>=0.3.4",
+     "pyttsx3>=2.98",
+     "pyyaml>=6.0.2",
+     "requests>=2.32.3",
+     "ruamel-yaml>=0.18.10",
+     "ruff>=0.8.6",
+     "scipy>=1.14.1",
+     "sherpa-onnx>=1.10.39",
+     "soundfile>=0.12.1",
+     "tomli>=2.2.1",
+     "torch==2.2.2; sys_platform == 'darwin' and platform_machine == 'x86_64'",
+     "torch>=2.6.0; sys_platform == 'darwin' and platform_machine == 'arm64'",
+     "torch>=2.6.0; sys_platform != 'darwin'",
+     "tqdm>=4.67.1",
+     "uvicorn[standard]>=0.33.0",
+     "websocket-client>=1.8.0",
+     "letta-client>=0.1.100",
+     "duckduckgo-mcp-server>=0.1.1",
+ ]
+
+ [project.optional-dependencies]
+ bilibili = [
+     "aiohttp>=3.10.0",
+     "Brotli~=1.1.0",
+     "yarl>=1.12.0,<2.0"
+ ]
+
+ [tool.pixi.project]
+ channels = ["conda-forge"]
+ platforms = ["win-64", "linux-64"]
+
+ [tool.pixi.pypi-dependencies]
+ open-llm-vtuber = { path = ".", editable = true }
+
+ [tool.pixi.dependencies]
+ cudnn = ">=8.0,<9"
+ cudatoolkit = ">=11.0,<12"
+
+ [tool.ruff]
+ target-version = "py310"
+
+ [tool.ruff.lint]
+ # Ignore E402 (module level import not at top of file) for the run_bilibili_live.py script
+ per-file-ignores = { "scripts/run_bilibili_live.py" = ["E402"] }
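The three `torch` entries above use PEP 508 environment markers so each platform resolves exactly one pin (Intel macs stay on 2.2.2, since newer torch releases dropped x86_64 macOS wheels). A small illustrative helper, not part of the project, mirroring that selection logic:

```python
def select_torch_pin(sys_platform: str, platform_machine: str) -> str:
    """Mirror the PEP 508 markers on the torch dependency entries:
    only the marker matching the current platform takes effect."""
    if sys_platform == "darwin" and platform_machine == "x86_64":
        return "torch==2.2.2"  # Intel macs: pinned to the older wheel line
    # Apple Silicon and every non-darwin platform share the newer pin
    return "torch>=2.6.0"


print(select_torch_pin("darwin", "x86_64"))  # -> torch==2.2.2
print(select_torch_pin("linux", "x86_64"))   # -> torch>=2.6.0
```

Because the markers are mutually exclusive, resolvers like uv or pip never see conflicting requirements for the same package.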
run_server.py ADDED
@@ -0,0 +1,178 @@
+ import os
+ import sys
+ import atexit
+ import asyncio
+ import argparse
+ import subprocess
+ from pathlib import Path
+ import tomli
+ import uvicorn
+ from loguru import logger
+ from upgrade_codes.upgrade_manager import UpgradeManager
+
+ from src.open_llm_vtuber.server import WebSocketServer
+ from src.open_llm_vtuber.config_manager import Config, read_yaml, validate_config
+
+ os.environ["HF_HOME"] = str(Path(__file__).parent / "models")
+ os.environ["MODELSCOPE_CACHE"] = str(Path(__file__).parent / "models")
+
+ upgrade_manager = UpgradeManager()
+
+
+ def get_version() -> str:
+     with open("pyproject.toml", "rb") as f:
+         pyproject = tomli.load(f)
+     return pyproject["project"]["version"]
+
+
+ def init_logger(console_log_level: str = "INFO") -> None:
+     logger.remove()
+     # Console output
+     logger.add(
+         sys.stderr,
+         level=console_log_level,
+         format="<green>{time:YYYY-MM-DD HH:mm:ss}</green> | <level>{level: <8}</level> | <cyan>{name}</cyan>:<cyan>{function}</cyan>:<cyan>{line}</cyan> | {message}",
+         colorize=True,
+     )
+
+     # File output
+     logger.add(
+         "logs/debug_{time:YYYY-MM-DD}.log",
+         rotation="10 MB",
+         retention="30 days",
+         level="DEBUG",
+         format="{time:YYYY-MM-DD HH:mm:ss.SSS} | {level: <8} | {name}:{function}:{line} | {message} | {extra}",
+         backtrace=True,
+         diagnose=True,
+     )
+
+
+ def check_frontend_submodule(lang=None):
+     """
+     Check if the frontend submodule is initialized. If not, attempt to initialize it.
+     If initialization fails, log an error message.
+     """
+     if lang is None:
+         lang = upgrade_manager.lang
+
+     frontend_path = Path(__file__).parent / "frontend" / "index.html"
+     if not frontend_path.exists():
+         if lang == "zh":
+             logger.warning("未找到前端子模块,正在尝试初始化子模块...")
+         else:
+             logger.warning(
+                 "Frontend submodule not found, attempting to initialize submodules..."
+             )
+
+         try:
+             subprocess.run(
+                 ["git", "submodule", "update", "--init", "--recursive"], check=True
+             )
+             if frontend_path.exists():
+                 if lang == "zh":
+                     logger.info("👍 前端子模块(和其他子模块)初始化成功。")
+                 else:
+                     logger.info(
+                         "👍 Frontend submodule (and other submodules) initialized successfully."
+                     )
+             else:
+                 if lang == "zh":
+                     logger.critical(
+                         '子模块初始化失败。\n你之后可能会在浏览器中看到 {{"detail":"Not Found"}} 的错误提示。请检查我们的快速入门指南和常见问题页面以获取更多信息。'
+                     )
+                     logger.error(
+                         "初始化子模块后,前端文件仍然缺失。\n"
+                         + "你是否手动更改或删除了 `frontend` 文件夹?\n"
+                         + "它是一个 Git 子模块 - 你不应该直接修改它。\n"
+                         + "如果你这样做了,请使用 `git restore frontend` 丢弃你的更改,然后再试一次。\n"
+                     )
+                 else:
+                     logger.critical(
+                         'Failed to initialize submodules. \nYou might see {{"detail":"Not Found"}} in your browser. Please check our quick start guide and common issues page from our documentation.'
+                     )
+                     logger.error(
+                         "Frontend files are still missing after submodule initialization.\n"
+                         + "Did you manually change or delete the `frontend` folder? \n"
+                         + "It's a Git submodule — you shouldn't modify it directly. \n"
+                         + "If you did, discard your changes with `git restore frontend`, then try again.\n"
+                     )
+         except Exception as e:
+             if lang == "zh":
+                 logger.critical(
+                     f'初始化子模块失败: {e}。\n怀疑你跟 GitHub 之间有网络问题。你之后可能会在浏览器中看到 {{"detail":"Not Found"}} 的错误提示。请检查我们的快速入门指南和常见问题页面以获取更多信息。\n'
+                 )
+             else:
+                 logger.critical(
+                     f'Failed to initialize submodules: {e}. \nYou might see {{"detail":"Not Found"}} in your browser. Please check our quick start guide and common issues page from our documentation.\n'
+                 )
+
+
+ def parse_args():
+     parser = argparse.ArgumentParser(description="Open-LLM-VTuber Server")
+     parser.add_argument("--verbose", action="store_true", help="Enable verbose logging")
+     parser.add_argument(
+         "--hf_mirror", action="store_true", help="Use Hugging Face mirror"
+     )
+     return parser.parse_args()
+
+
+ @logger.catch
+ def run(console_log_level: str):
+     init_logger(console_log_level)
+     logger.info(f"Open-LLM-VTuber, version v{get_version()}")
+
+     # Get selected language
+     lang = upgrade_manager.lang
+
+     # Check if the frontend submodule is initialized
+     check_frontend_submodule(lang)
+
+     # Sync user config with default config
+     try:
+         upgrade_manager.sync_user_config()
+     except Exception as e:
+         logger.error(f"Error syncing user config: {e}")
+
+     atexit.register(WebSocketServer.clean_cache)
+
+     # Load configurations from yaml file
+     config: Config = validate_config(read_yaml("conf.yaml"))
+     server_config = config.system_config
+
+     if server_config.enable_proxy:
+         logger.info("Proxy mode enabled - /proxy-ws endpoint will be available")
+
+     # Initialize the WebSocket server (synchronous part)
+     server = WebSocketServer(config=config)
+
+     # Perform asynchronous initialization (loading context, etc.)
+     logger.info("Initializing server context...")
+     try:
+         asyncio.run(server.initialize())
+         logger.info("Server context initialized successfully.")
+     except Exception as e:
+         logger.error(f"Failed to initialize server context: {e}")
+         sys.exit(1)  # Exit if initialization fails
+
+     # Run the Uvicorn server
+     logger.info(f"Starting server on {server_config.host}:{server_config.port}")
+     uvicorn.run(
+         app=server.app,
+         host=server_config.host,
+         port=server_config.port,
+         log_level=console_log_level.lower(),
+     )
+
+
+ if __name__ == "__main__":
+     args = parse_args()
+     console_log_level = "DEBUG" if args.verbose else "INFO"
+     if args.verbose:
+         logger.info("Running in verbose mode")
+     else:
+         logger.info(
+             "Running in standard mode. For detailed debug logs, use: uv run run_server.py --verbose"
+         )
+     if args.hf_mirror:
+         os.environ["HF_ENDPOINT"] = "https://hf-mirror.com"
+     run(console_log_level=console_log_level)
scripts/run_bilibili_live.py ADDED
@@ -0,0 +1,62 @@
+ import os
+ import sys
+ import asyncio
+ from loguru import logger
+
+ # Add project root to path to enable imports
+ project_root = os.path.abspath(os.path.join(os.path.dirname(__file__), ".."))
+ sys.path.insert(0, project_root)
+
+ from src.open_llm_vtuber.live.bilibili_live import BiliBiliLivePlatform
+ from src.open_llm_vtuber.config_manager.utils import read_yaml, validate_config
+
+
+ async def main():
+     """
+     Main function to run the BiliBili Live platform client.
+     Connects to BiliBili Live rooms and forwards danmaku messages to the VTuber.
+     """
+     logger.info("Starting BiliBili Live platform client")
+
+     try:
+         # Load configuration
+         config_path = os.path.join(project_root, "conf.yaml")
+         config_data = read_yaml(config_path)
+         config = validate_config(config_data)
+
+         # Extract BiliBili Live configuration
+         bilibili_config = config.live_config.bilibili_live
+
+         # Check if room IDs are provided
+         if not bilibili_config.room_ids:
+             logger.error(
+                 "No BiliBili room IDs specified in configuration. Please add at least one room ID."
+             )
+             return
+
+         logger.info(f"Connecting to BiliBili Live rooms: {bilibili_config.room_ids}")
+
+         # Initialize and run the BiliBili Live platform
+         platform = BiliBiliLivePlatform(
+             room_ids=bilibili_config.room_ids, sessdata=bilibili_config.sessdata
+         )
+
+         await platform.run()
+
+     except ImportError as e:
+         logger.error(f"Failed to import required modules: {e}")
+         logger.error("Make sure you have installed blivedm with: pip install blivedm")
+     except Exception as e:
+         logger.error(f"Error starting BiliBili Live client: {e}")
+         import traceback
+
+         logger.debug(traceback.format_exc())
+
+
+ if __name__ == "__main__":
+     try:
+         asyncio.run(main())
+     except KeyboardInterrupt:
+         logger.info("Shutting down BiliBili Live platform")
+
+ # Usage: uv run scripts/run_bilibili_live.py
src/open_llm_vtuber/__init__.py ADDED
File without changes
src/open_llm_vtuber/agent/__init__.py ADDED
File without changes
src/open_llm_vtuber/agent/agent_factory.py ADDED
@@ -0,0 +1,132 @@
+ from typing import Literal, Optional
+ from loguru import logger
+
+ from .agents.agent_interface import AgentInterface
+ from .agents.basic_memory_agent import BasicMemoryAgent
+ from .stateless_llm_factory import LLMFactory as StatelessLLMFactory
+ from .agents.hume_ai import HumeAIAgent
+ from .agents.letta_agent import LettaAgent
+
+ from ..mcpp.tool_manager import ToolManager
+ from ..mcpp.tool_executor import ToolExecutor
+
+
+ class AgentFactory:
+     @staticmethod
+     def create_agent(
+         conversation_agent_choice: str,
+         agent_settings: dict,
+         llm_configs: dict,
+         system_prompt: str,
+         live2d_model=None,
+         tts_preprocessor_config=None,
+         **kwargs,
+     ) -> AgentInterface:
+         """Create an agent based on the configuration.
+
+         Args:
+             conversation_agent_choice: The type of agent to create
+             agent_settings: Settings for different types of agents
+             llm_configs: Pool of LLM configurations
+             system_prompt: The system prompt to use
+             live2d_model: Live2D model instance for expression extraction
+             tts_preprocessor_config: Configuration for TTS preprocessing
+             **kwargs: Additional arguments
+         """
+         logger.info(f"Initializing agent: {conversation_agent_choice}")
+
+         if conversation_agent_choice == "basic_memory_agent":
+             # Get the LLM provider choice from agent settings
+             basic_memory_settings: dict = agent_settings.get("basic_memory_agent", {})
+             llm_provider: str = basic_memory_settings.get("llm_provider")
+
+             if not llm_provider:
+                 raise ValueError("LLM provider not specified for basic memory agent")
+
+             # Get the LLM config for this provider
+             llm_config: dict = llm_configs.get(llm_provider)
+             if not llm_config:
+                 raise ValueError(
+                     f"Configuration not found for LLM provider: {llm_provider}"
+                 )
+             # Pop interrupt_method only after confirming the config exists
+             interrupt_method: Literal["system", "user"] = llm_config.pop(
+                 "interrupt_method", "user"
+             )
+
+             # Create the stateless LLM
+             llm = StatelessLLMFactory.create_llm(
+                 llm_provider=llm_provider, system_prompt=system_prompt, **llm_config
+             )
+
+             tool_prompts = kwargs.get("system_config", {}).get("tool_prompts", {})
+
+             # Extract MCP components/data needed by BasicMemoryAgent from kwargs
+             tool_manager: Optional[ToolManager] = kwargs.get("tool_manager")
+             tool_executor: Optional[ToolExecutor] = kwargs.get("tool_executor")
+             mcp_prompt_string: str = kwargs.get("mcp_prompt_string", "")
+
+             # Create the agent with the LLM and live2d_model
+             return BasicMemoryAgent(
+                 llm=llm,
+                 system=system_prompt,
+                 live2d_model=live2d_model,
+                 tts_preprocessor_config=tts_preprocessor_config,
+                 faster_first_response=basic_memory_settings.get(
+                     "faster_first_response", True
+                 ),
+                 segment_method=basic_memory_settings.get("segment_method", "pysbd"),
+                 use_mcpp=basic_memory_settings.get("use_mcpp", False),
+                 interrupt_method=interrupt_method,
+                 tool_prompts=tool_prompts,
+                 tool_manager=tool_manager,
+                 tool_executor=tool_executor,
+                 mcp_prompt_string=mcp_prompt_string,
+             )
+
+         elif conversation_agent_choice == "mem0_agent":
+             from .agents.mem0_llm import LLM as Mem0LLM
+
+             mem0_settings = agent_settings.get("mem0_agent", {})
+             if not mem0_settings:
+                 raise ValueError("Mem0 agent settings not found")
+
+             # Validate required settings
+             required_fields = ["base_url", "model", "mem0_config"]
+             for field in required_fields:
+                 if field not in mem0_settings:
+                     raise ValueError(
+                         f"Missing required field '{field}' in mem0_agent settings"
+                     )
+
+             return Mem0LLM(
+                 user_id=kwargs.get("user_id", "default"),
+                 system=system_prompt,
+                 live2d_model=live2d_model,
+                 **mem0_settings,
+             )
+
+         elif conversation_agent_choice == "hume_ai_agent":
+             settings = agent_settings.get("hume_ai_agent", {})
+             return HumeAIAgent(
+                 api_key=settings.get("api_key"),
+                 host=settings.get("host", "api.hume.ai"),
+                 config_id=settings.get("config_id"),
+                 idle_timeout=settings.get("idle_timeout", 15),
+             )
+
+         elif conversation_agent_choice == "letta_agent":
+             settings = agent_settings.get("letta_agent", {})
+             return LettaAgent(
+                 live2d_model=live2d_model,
+                 id=settings.get("id"),
+                 tts_preprocessor_config=tts_preprocessor_config,
+                 faster_first_response=settings.get("faster_first_response"),
+                 segment_method=settings.get("segment_method"),
+                 host=settings.get("host"),
+                 port=settings.get("port"),
+             )
+
+         else:
+             raise ValueError(f"Unsupported agent type: {conversation_agent_choice}")
src/open_llm_vtuber/agent/agents/__init__.py ADDED
File without changes