```bash
python ./rag.py --index "bm25" --dataset "hotpotqa-train" --similarity bertscore \
  --maxKnowledge 80 --maxParagraph 100 --maxQuestion 80 --topk 3 \
  --modelname "meta-llama/Llama-3.1-8B-Instruct" --randomSeed 0 \
  --output "./rag_results.txt"
```
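The `--index "bm25"` flag selects sparse BM25 retrieval over the knowledge documents. As a reference point, here is a minimal self-contained sketch of Okapi BM25 scoring with the standard `k1`/`b` defaults; the toy corpus and the function name are illustrative, not taken from `rag.py`:

```python
import math
from collections import Counter

def bm25_scores(query, docs, k1=1.5, b=0.75):
    """Score each whitespace-tokenized document against the query with Okapi BM25."""
    tokenized = [d.lower().split() for d in docs]
    avgdl = sum(len(d) for d in tokenized) / len(tokenized)
    n_docs = len(tokenized)
    df = Counter()  # document frequency of each term
    for d in tokenized:
        df.update(set(d))
    scores = []
    for d in tokenized:
        tf = Counter(d)
        score = 0.0
        for term in query.lower().split():
            if term not in tf:
                continue
            idf = math.log((n_docs - df[term] + 0.5) / (df[term] + 0.5) + 1)
            norm = tf[term] + k1 * (1 - b + b * len(d) / avgdl)
            score += idf * tf[term] * (k1 + 1) / norm
        scores.append(score)
    return scores

docs = ["the cat sat on the mat", "dogs chase cats", "quantum computing basics"]
scores = bm25_scores("cat mat", docs)
print(max(range(len(docs)), key=lambda i: scores[i]))  # 0 (first document matches best)
```

Only exact-token matches contribute, which is why the second document ("cats" vs. "cat") scores zero here; real pipelines usually add stemming or subword tokenization on top.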
### Note:
#### `--maxKnowledge` parameter notice:
> [!NOTE]
> Approximate token counts corresponding to the knowledge document sizes of the "squad-train" and "hotpotqa-train" datasets.
> datasets=("squad-train")
> - when k = 3, tokens = 21,000
> - when k = 4, tokens = 32,000
> - when k = 7, tokens = 50,000
>
> datasets=("hotpotqa-train")
> - all k = 7,405 articles, tokens = 10,038,084
> - when k = 1, tokens = 1,400
> - when k = 16, tokens = 22,400
> - when k = 24, tokens = 33,667
> - when k = 32, tokens = 44,800
> - when k = 48, tokens = 64,000
> - when k = 64, tokens = 85,000
> - when k = 80, tokens = 106,000
#### `--maxQuestion` parameter notice:
> - when using the "squad-train" dataset, each knowledge document has on average 150 questions
> - when using the "hotpotqa-train" dataset, each knowledge document has exactly 1 question

> [!TIP]
> Since 1 document in the "hotpotqa-train" dataset has only 1 question, it may not be sufficient for large-scale evaluation.
> Evaluating across many documents could be a better approach.
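The per-document question counts above follow directly from the SQuAD-style JSON layout (`data → paragraphs → qas`). A minimal sketch of counting questions per article, run on a made-up toy sample that only mirrors the structure of the real squad-train file:

```python
# Toy stand-in for a SQuAD-style dataset; the real file is far larger.
squad_like = {
    "data": [
        {
            "title": "Article_A",
            "paragraphs": [
                {"context": "...", "qas": [{"question": "q1"}, {"question": "q2"}]},
                {"context": "...", "qas": [{"question": "q3"}]},
            ],
        },
        {
            "title": "Article_B",
            "paragraphs": [
                {"context": "...", "qas": [{"question": "q4"}]},
            ],
        },
    ]
}

# Questions per knowledge document (article), then the average.
counts = [
    sum(len(p["qas"]) for p in article["paragraphs"])
    for article in squad_like["data"]
]
avg = sum(counts) / len(counts)
print(counts, avg)  # [3, 1] 2.0
```

Running the same count over the real datasets is how the "~150 questions per SQuAD article vs. 1 per HotpotQA document" contrast can be verified.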
## Citation
```
@misc{chan2024dontragcacheaugmentedgeneration,
      title={Don't Do RAG: When Cache-Augmented Generation is All You Need for Knowledge Tasks},
      author={Brian J Chan and Chao-Ting Chen and Jui-Hung Cheng and Hen-Hsen Huang},
      year={2024},
      eprint={2412.15605},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2412.15605},
}
```
#!/bin/bash
# squad dataset
mkdir -p ./datasets/squad  # ensure the target directory exists before curl -o
curl -L -o ./datasets/squad/stanford-question-answering-dataset.zip \
  https://www.kaggle.com/api/v1/datasets/download/stanfordu/stanford-question-answering-dataset
unzip ./datasets/squad/stanford-question-answering-dataset.zip -d ./datasets/squad/
rm ./datasets/squad/stanford-question-answering-dataset.zip

# hotpotqa dataset
mkdir -p ./datasets/hotpotqa
curl -L -o ./datasets/hotpotqa/hotpotqa-question-answering-dataset.zip \
  https://www.kaggle.com/api/v1/datasets/download/jeromeblanchet/hotpotqa-question-answering-dataset
unzip ./datasets/hotpotqa/hotpotqa-question-answering-dataset.zip -d ./datasets/hotpotqa/
rm ./datasets/hotpotqa/hotpotqa-question-answering-dataset.zip
import torch
import pandas as pd
import argparse
import os
import json
import random
from time import time
from sentence_transformers import SentenceTransformer, util
from transformers import BitsAndBytesConfig, AutoTokenizer, AutoModelForCausalLM
from transformers.cache_utils import DynamicCache
def get_env():
    """Read KEY=VALUE pairs from a .env (or env) file into a dict."""
    env_dict = {}
    with open(file=".env" if os.path.exists(".env") else "env", mode="r") as f:
        for line in f:
            # split only on the first "=" so values containing "=" survive
            key, value = line.strip().split("=", 1)
            env_dict[key] = value.strip('"')
    return env_dict
"""Hugging Face Llama model"""
HF_TOKEN = get_env()["HF_TOKEN"]

global model_name, model, tokenizer
global rand_seed
def generate(
    model,
    input_ids: torch.Tensor,
    past_key_values,
    max_new_tokens: int = 300
) -> torch.Tensor:
    """
    Generate text with greedy decoding, reusing a precomputed KV cache.
    """
    # The original body is truncated in this copy; below is a minimal
    # reconstruction of a standard greedy loop over the cached context.
    output_ids = input_ids
    next_input = input_ids
    with torch.no_grad():
        for _ in range(max_new_tokens):
            outputs = model(
                input_ids=next_input,
                past_key_values=past_key_values,
                use_cache=True,
            )
            past_key_values = outputs.past_key_values
            # greedy decoding: pick the highest-probability next token
            next_token = outputs.logits[:, -1, :].argmax(dim=-1, keepdim=True)
            output_ids = torch.cat([output_ids, next_token], dim=-1)
            next_input = next_token
            if next_token.item() == tokenizer.eos_token_id:
                break
    return output_ids