NickNYU
[bugfix] Fix the cut-off issue caused by the LLM predict token limit (256 by default in the OpenAI Python library) by setting temperature to 0 and switching the LLM predict method from compact-refine to refine
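The commit above changes two things: a deterministic temperature of 0, and a "refine" response mode that feeds retrieved chunks to the LLM one at a time instead of compacting them into a single prompt that can overrun the token budget. A minimal sketch of how that could look against the llama_index 0.6.x API listed in the dependencies (the class and parameter names, the `max_tokens=512` value, and the `"data"` directory are assumptions, not taken from this repo's code; running it requires an OpenAI API key):

```python
from langchain.chat_models import ChatOpenAI
from llama_index import (
    GPTVectorStoreIndex,
    LLMPredictor,
    ServiceContext,
    SimpleDirectoryReader,
)

# temperature=0 for deterministic answers; raise max_tokens above the
# 256-token default so responses are not truncated mid-sentence.
llm_predictor = LLMPredictor(llm=ChatOpenAI(temperature=0, max_tokens=512))
service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor)

documents = SimpleDirectoryReader("data").load_data()
index = GPTVectorStoreIndex.from_documents(
    documents, service_context=service_context
)

# "refine" iterates over chunks, refining the answer each time, rather than
# packing many chunks into one prompt as the compact/compact-refine mode does.
query_engine = index.as_query_engine(response_mode="refine")
response = query_engine.query("Summarize the document.")
print(response)
```

This trades extra LLM calls (one per retrieved chunk) for prompts that stay well under the context limit, which is what resolves the cut-off described in the commit message.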
bd59653

Dependencies:
llama_index>=0.6.3
llama_hub
streamlit
ruff
black
mypy
accelerate
python-dotenv
sentence_transformers
wandb