---
license: mit
tags:
- text-to-video
- prompt-engineering
- video-generation
- llm
- rag
- research
datasets:
- junchenfu/llmpopcorn_prompts
pipeline_tag: text-generation
---

# LLMPopcorn Usage Instructions

Welcome to LLMPopcorn! This guide walks you through generating video titles and prompts, and then creating AI-generated videos from those prompts.

## Prerequisites

### Install Required Python Packages

Before running the scripts, install the required Python packages:

```bash
pip install torch transformers diffusers tqdm numpy pandas sentence-transformers faiss-cpu openai huggingface_hub safetensors
```

### Download the Dataset

Download the Microlens dataset and place it in the `Microlens` folder; `PE.py` expects it there.

## Step 1: Generate Video Titles and Prompts

To generate video titles and prompts, run the `LLMPopcorn.py` script:
```bash
python LLMPopcorn.py
```

To run the enhanced version of LLMPopcorn, execute the `PE.py` script:
```bash
python PE.py
```
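The enhancement step is not documented here; given the repo's `rag`, `sentence-transformers`, and `faiss-cpu` dependencies and its use of the Microlens dataset, `PE.py` plausibly retrieves similar existing video titles as few-shot context for the LLM. The sketch below only illustrates that retrieval idea: the vocabulary, titles, and function names are invented for the example, and a toy count-vector embedding stands in for a real sentence-transformer so the snippet runs offline.

```python
# Illustrative sketch of retrieval-augmented prompt enhancement.
# NOT the actual PE.py: a toy count-vector embedding replaces the
# sentence-transformer, and a linear scan replaces the FAISS index.
import numpy as np

VOCAB = ["cat", "food", "laser", "prank", "video", "funny", "street", "tour"]

def embed(text: str) -> np.ndarray:
    """Toy unit-norm count vector over a fixed vocabulary."""
    tokens = text.lower().split()
    vec = np.array([float(tokens.count(w)) for w in VOCAB])
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def retrieve_similar(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k corpus entries most cosine-similar to the query."""
    sims = np.array([embed(query) @ embed(doc) for doc in corpus])
    top = np.argsort(sims)[::-1][:k]
    return [corpus[i] for i in top]

# Hypothetical Microlens-style titles used as retrieval context.
titles = [
    "Cat reacts to cucumber prank",
    "Street food tour in Bangkok",
    "Cat chases laser pointer compilation",
]
examples = retrieve_similar("funny cat video", titles)
enhanced = "Write a popular short-video prompt in the style of: " + "; ".join(examples)
print(enhanced)
```

In a real pipeline, `embed` would be a sentence-transformer encode call and `retrieve_similar` a FAISS index lookup over the Microlens titles.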

## Step 2: Generate AI Videos

To create AI-generated videos, execute the `generating_images_videos_three.py` script:
```bash
python generating_images_videos_three.py
```

## Step 3: Clone the Evaluation Code

Clone the evaluation code from the MMRA repository, then follow the instructions there to evaluate the generated videos.
54
+
55
+ ## Tutorial: Using the Prompts Dataset
56
+
57
+ You can easily download and use the structured prompts directly from Hugging Face:
58
+
59
+ ### 1. Install `datasets`
60
+ ```bash
61
+ pip install datasets
62
+ ```
63
+
64
+ ### 2. Load the Dataset in Python
65
+ ```python
66
+ from datasets import load_dataset
67
+
68
+ # Load the LLMPopcorn prompts
69
+ dataset = load_dataset("junchenfu/llmpopcorn_prompts")
70
+
71
+ # Access the data (abstract or concrete)
72
+ for item in dataset["train"]:
73
+ print(f"Type: {item['type']}, Prompt: {item['prompt']}")
74
+ ```

This dataset contains both abstract and concrete prompts, which you can use as input for the video generation scripts in Step 2.
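If you want to feed the two prompt styles to Step 2 separately, you can first group records by their `type` field. The sketch below uses illustrative in-memory records; real records come from `load_dataset("junchenfu/llmpopcorn_prompts")`, and the exact `type` values are assumed here to be `"abstract"` and `"concrete"` per the description above.

```python
# Sketch: group prompts by their `type` field before passing them to the
# Step 2 video-generation script. The sample records are invented for
# illustration; real ones come from the Hugging Face dataset.
from collections import defaultdict

sample = [
    {"type": "abstract", "prompt": "A dream dissolving into colors"},
    {"type": "concrete", "prompt": "A chef plating ramen in a busy kitchen"},
    {"type": "abstract", "prompt": "Time flowing backwards over a city"},
]

by_type = defaultdict(list)
for item in sample:
    by_type[item["type"]].append(item["prompt"])

# by_type["abstract"] and by_type["concrete"] can now be handled separately.
```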