PranavReddy18 committed on
Commit 39e517d · verified · 1 Parent(s): 139dac8

Upload 7 files

Files changed (7)
  1. .env +3 -0
  2. .gitignore +2 -0
  3. READme.md +32 -0
  4. Research/Ai.ipynb +238 -0
  5. app.py +62 -0
  6. main.py +88 -0
  7. requirements.txt +9 -0
.env ADDED
@@ -0,0 +1,3 @@
+ GROQ_API_KEY="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"
+ LANGCHAIN_API_KEY="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"
+ LANGSMITH_PROJECT="AI Code Reviewer"
.gitignore ADDED
@@ -0,0 +1,2 @@
+ venv/
+ .env
READme.md ADDED
@@ -0,0 +1,32 @@
+ ## 🤖 AI Code Reviewer 📝
+
+ 🚀 Overview
+
+ AI Code Reviewer is a Python-based application that leverages FastAPI and Streamlit to provide instant feedback on Python code. The app integrates LangChain and Groq's LLM (Gemma2-9b-it) to analyze code snippets, detect potential issues, and suggest improvements.
+
+ 🛠 Features
+
+ 🔍 Instant Code Review: Get feedback on Python code, including error detection and fixes.
+
+ 💡 Bug Detection: Identifies common mistakes like indentation errors and division by zero.
+
+ 🎯 Corrected Code Suggestions: Provides corrected versions of problematic code snippets.
+
+ 🚀 FastAPI Backend: A lightweight, high-performance API for processing requests.
+
+ 🌐 Streamlit Frontend: User-friendly web interface for easy interaction.
+
+ 🏗️ Tech Stack
+
+ Python 3.11
+
+ FastAPI (Backend API)
+
+ Streamlit (Frontend UI)
+
+ LangChain (LLM-based processing)
+
+ Groq LLM (Gemma2-9b-it for language understanding)
+
+ Uvicorn (ASGI server)
+
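The FastAPI backend listed in the tech stack exposes a POST `/review` endpoint (defined in `main.py` in this commit) that takes a `{"code": ...}` JSON body and returns `{"feedback": ...}`. A minimal stdlib client sketch, with the URL and port assumed from `main.py`'s `uvicorn.run(app, host="0.0.0.0", port=8001)`:

```python
import json
from urllib import request

def build_review_request(code: str, base_url: str = "http://localhost:8001") -> request.Request:
    """Build the POST /review request the FastAPI backend in main.py expects."""
    payload = json.dumps({"code": code}).encode("utf-8")
    return request.Request(
        f"{base_url}/review",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# With the backend running (python main.py), send a snippet for review:
#   with request.urlopen(build_review_request("import nump as np")) as resp:
#       print(json.loads(resp.read())["feedback"])
req = build_review_request("import nump as np")
print(req.full_url)  # http://localhost:8001/review
```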
Research/Ai.ipynb ADDED
@@ -0,0 +1,238 @@
+ {
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## AI Code Reviewer with Langchain and Groq"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "True"
+ ]
+ },
+ "execution_count": 1,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "from dotenv import load_dotenv\n",
+ "load_dotenv()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from langchain_groq import ChatGroq \n",
+ "from langchain.chains import RetrievalQA\n",
+ "from langchain.chains import LLMChain\n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import os \n",
+ "os.environ[\"GROQ_API_KEY\"]=os.getenv(\"GROQ_API_KEY\")\n",
+ "os.environ[\"LANGSMITH_TRACING_V2\"]=\"true\"\n",
+ "os.environ[\"LANGSMITH_ENDPOINT\"]=\"https://api.smith.langchain.com\"\n",
+ "os.environ[\"LANGCHAIN_API_KEY\"]=os.getenv(\"LANGCHAIN_API_KEY\")\n",
+ "os.environ[\"LANGSMITH_PROJECT\"]=\"AI Code Reviewer\""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Loading The Model"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "GROQ_API_KEY=os.getenv(\"GROQ_API_KEY\")\n",
+ "llm=ChatGroq(api_key=GROQ_API_KEY,model_name=\"gemma2-9b-it\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "response=llm.invoke(\"import numy as np\")"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "'It seems like you\\'re trying to import the NumPy library. \\n\\nHowever, there\\'s a slight typo in your code. \"numy\" should be \"numpy\".\\n\\nHere\\'s the corrected import statement:\\n\\n```python\\nimport numpy as np\\n```\\n\\nThis line of code imports the NumPy library and gives it the alias \"np\". This is a common convention in Python, allowing you to use \"np\" instead of writing out \"numpy\" every time you need to use a NumPy function or object.\\n\\n\\n\\nLet me know if you have any other questions or need help with NumPy!\\n'"
+ ]
+ },
+ "execution_count": 6,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "response.content"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "## Trying out with Different Prompts"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 7,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from langchain.prompts import FewShotPromptTemplate, PromptTemplate\n",
+ "\n",
+ "# Define example responses for few-shot prompting\n",
+ "examples = [\n",
+ "    {\n",
+ "        \"input\": \"def add(a, b):\\nreturn a + b\",\n",
+ "        \"output\": \"Your function 'add' is missing proper indentation. Here's a corrected version:\\n\\ndef add(a, b):\\n    return a + b\\n\"\n",
+ "    },\n",
+ "    {\n",
+ "        \"input\": \"def divide(a, b):\\n    return a / b\",\n",
+ "        \"output\": \"Potential bug detected: Division by zero error. You should handle this case:\\n\\ndef divide(a, b):\\n    if b == 0:\\n        return 'Error: Division by zero'\\n    return a / b\\n\"\n",
+ "    }\n",
+ "]\n",
+ "\n",
+ "# Define example template\n",
+ "example_template = PromptTemplate(\n",
+ "    input_variables=[\"input\", \"output\"],\n",
+ "    template=\"Code: \\n{input}\\n\\nFeedback:\\n{output}\\n\"\n",
+ ")\n",
+ "prefix=\"\"\"You are a highly skilled Python code reviewer. \n",
+ "Your task is to analyze the given Python code, identify potential bugs, suggest improvements, and provide a corrected version of the code if necessary. Ensure that your feedback is clear, precise, and actionable.\n",
+ "First you have to specify where and what the error is.\n",
+ "Next give the correct code\n",
+ "\n",
+ "\"\"\"\n",
+ "\n",
+ "# Create a few-shot prompt template\n",
+ "few_shot_prompt = FewShotPromptTemplate(\n",
+ "    examples=examples,\n",
+ "    example_prompt=example_template,\n",
+ "    prefix=prefix,\n",
+ "    suffix=\"Code:\\n{input}\\n\\nFeedback:\",\n",
+ "    input_variables=[\"input\"]\n",
+ ")\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 8,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "C:\\Users\\saipr\\AppData\\Local\\Temp\\ipykernel_28632\\3821535042.py:2: LangChainDeprecationWarning: The class `LLMChain` was deprecated in LangChain 0.1.17 and will be removed in 1.0. Use :meth:`~RunnableSequence, e.g., `prompt | llm`` instead.\n",
+ "  llm_chain = LLMChain(llm=llm, prompt=few_shot_prompt)\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Create the LLMChain\n",
+ "llm_chain = LLMChain(llm=llm, prompt=few_shot_prompt)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 9,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stderr",
+ "output_type": "stream",
+ "text": [
+ "C:\\Users\\saipr\\AppData\\Local\\Temp\\ipykernel_28632\\911030789.py:3: LangChainDeprecationWarning: The method `Chain.run` was deprecated in langchain 0.1.0 and will be removed in 1.0. Use :meth:`~invoke` instead.\n",
+ "  response = llm_chain.run(input=code_snippet)\n"
+ ]
+ },
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "The error is a simple typo. \n",
+ "\n",
+ "`nump` should be `numpy`. \n",
+ "\n",
+ "Here's the corrected code:\n",
+ "\n",
+ "```python\n",
+ "import numpy as np\n",
+ "``` \n",
+ "\n"
+ ]
+ }
+ ],
+ "source": [
+ "\n",
+ "# Example usage\n",
+ "code_snippet = \"import nump as np\"\n",
+ "response = llm_chain.run(input=code_snippet)\n",
+ "print(response)\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.10.0"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+ }
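The notebook's stderr output flags `LLMChain` and `Chain.run` as deprecated in favor of the `prompt | llm` composition. A pure-Python sketch of what that pipe does (format the prompt, then invoke the model), with stubs standing in for `FewShotPromptTemplate` and `ChatGroq` so it runs without an API key; the stub classes and their canned reply are illustrative, not real LangChain or Groq behavior:

```python
class StubLLM:
    """Stands in for ChatGroq: returns a canned reply instead of calling Groq."""
    def invoke(self, prompt: str) -> str:
        return "Feedback: looks like a typo."

class StubFewShotPrompt:
    """Mimics the two steps behind `few_shot_prompt | llm`: format, then invoke."""
    suffix = "Code:\n{input}\n\nFeedback:"

    def format(self, **kwargs) -> str:
        return self.suffix.format(**kwargs)

    def __or__(self, llm):
        # LangChain's `prompt | llm` builds a runnable sequence; this mimics
        # its invoke: render the prompt from the inputs, then call the model.
        outer = self
        class Chain:
            def invoke(self, inputs: dict) -> str:
                return llm.invoke(outer.format(**inputs))
        return Chain()

chain = StubFewShotPrompt() | StubLLM()
print(chain.invoke({"input": "import nump as np"}))  # Feedback: looks like a typo.
```

With the real objects the replacement is the one the warning suggests: `chain = few_shot_prompt | llm` and `chain.invoke({"input": code_snippet})` instead of `LLMChain(...)` and `.run(...)`.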
app.py ADDED
@@ -0,0 +1,62 @@
+ import os
+ import streamlit as st
+ from langchain.prompts import FewShotPromptTemplate, PromptTemplate
+ from langchain.chains import LLMChain
+ from langchain_groq import ChatGroq
+ from dotenv import load_dotenv
+ load_dotenv()
+ # Set up API key
+ GROQ_API_KEY = os.getenv("GROQ_API_KEY")
+ llm = ChatGroq(api_key=GROQ_API_KEY, model_name="gemma2-9b-it")
+
+
+ # Define example responses for few-shot prompting
+ examples = [
+     {
+         "input": "def add(a, b):\nreturn a + b",
+         "output": "Your function 'add' is missing proper indentation. Here's a corrected version:\n\ndef add(a, b):\n    return a + b\n"
+     },
+     {
+         "input": "def divide(a, b):\n    return a / b",
+         "output": "Potential bug detected: Division by zero error. You should handle this case:\n\ndef divide(a, b):\n    if b == 0:\n        return 'Error: Division by zero'\n    return a / b\n"
+     }
+ ]
+
+ # Define example template
+ example_template = PromptTemplate(
+     input_variables=["input", "output"],
+     template="Code: \n{input}\n\nFeedback:\n{output}\n"
+ )
+ prefix = """You are a highly skilled Python code reviewer.
+ Your task is to analyze the given Python code, identify potential bugs, suggest improvements, and provide a corrected version of the code if necessary. Ensure that your feedback is clear, precise, and actionable.
+ First you have to specify where and what the error is.
+ Next give the correct code
+ If the code is out of context reply "Out of Context"
+
+ """
+
+ # Create a few-shot prompt template
+ few_shot_prompt = FewShotPromptTemplate(
+     examples=examples,
+     example_prompt=example_template,
+     prefix=prefix,
+     suffix="Code:\n{input}\n\nFeedback:",
+     input_variables=["input"]
+ )
+
+ # Create the LLMChain
+ llm_chain = LLMChain(llm=llm, prompt=few_shot_prompt)
+
+ # Streamlit App
+ st.title("🤖 AI Code Reviewer 📝")
+ st.markdown("### Get instant feedback on your Python code! 🚀")
+
+ code_snippet = st.text_area("✍️ Enter Python code below:", height=200)
+
+ if st.button("🔍 Review Code"):
+     if code_snippet.strip():
+         response = llm_chain.run(input=code_snippet)
+         st.subheader("🧐 Review Feedback:")
+         st.code(response, language="python")
+     else:
+         st.warning("⚠️ Please enter some Python code to review!")
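The second few-shot example in app.py ships a corrected `divide` inside the prompt string; pulled out of the string, it is directly runnable:

```python
def divide(a, b):
    # Guard from the few-shot example: report division by zero instead of raising.
    if b == 0:
        return 'Error: Division by zero'
    return a / b

print(divide(10, 2))  # 5.0
print(divide(1, 0))   # Error: Division by zero
```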
main.py ADDED
@@ -0,0 +1,88 @@
+ import os
+ import streamlit as st
+ from dotenv import load_dotenv
+ from fastapi import FastAPI, HTTPException
+ from pydantic import BaseModel
+ from langchain.prompts import FewShotPromptTemplate, PromptTemplate
+ from langchain.chains import LLMChain
+ from langchain_groq import ChatGroq
+ import uvicorn
+
+ # Load environment variables
+ load_dotenv()
+
+ # Set up API key
+ GROQ_API_KEY = os.getenv("GROQ_API_KEY")
+
+ if not GROQ_API_KEY:
+     raise ValueError("🚨 API Key Missing! Please check your .env file and restart the app.")
+
+ llm = ChatGroq(api_key=GROQ_API_KEY, model_name="gemma2-9b-it")
+
+ # Define example responses for few-shot prompting
+ examples = [
+     {
+         "input": "def add(a, b):\nreturn a + b",
+         "output": "Your function 'add' is missing proper indentation. Here's a corrected version:\n\ndef add(a, b):\n    return a + b\n"
+     },
+     {
+         "input": "def divide(a, b):\n    return a / b",
+         "output": "Potential bug detected: Division by zero error. You should handle this case:\n\ndef divide(a, b):\n    if b == 0:\n        return 'Error: Division by zero'\n    return a / b\n"
+     }
+ ]
+
+ # Define example template
+ example_template = PromptTemplate(
+     input_variables=["input", "output"],
+     template="Code: \n{input}\n\nFeedback:\n{output}\n"
+ )
+ prefix = """You are a highly skilled Python code reviewer.
+ Your task is to analyze the given Python code, identify potential bugs, suggest improvements, and provide a corrected version of the code if necessary. Ensure that your feedback is clear, precise, and actionable.
+ First you have to specify where and what the error is.
+ Next give the correct code
+ If the code is out of context reply "Out of Context"
+
+ """
+
+ # Create a few-shot prompt template
+ few_shot_prompt = FewShotPromptTemplate(
+     examples=examples,
+     example_prompt=example_template,
+     prefix=prefix,
+     suffix="Code:\n{input}\n\nFeedback:",
+     input_variables=["input"]
+ )
+
+ # Create the LLMChain
+ llm_chain = LLMChain(llm=llm, prompt=few_shot_prompt)
+
+ # FastAPI Backend
+ app = FastAPI()
+
+ class CodeReviewRequest(BaseModel):
+     code: str
+
+ @app.post("/review")
+ def review_code(request: CodeReviewRequest):
+     if not request.code.strip():
+         raise HTTPException(status_code=400, detail="No code provided.")
+     response = llm_chain.run(input=request.code)
+     return {"feedback": response}
+
+ # Streamlit Frontend
+ st.title("🤖 AI Code Reviewer 📝")
+ st.markdown("### Get instant feedback on your Python code! 🚀")
+
+ code_snippet = st.text_area("✍️ Paste your Python code below:", height=200)
+
+ if st.button("🔍 Review Code"):
+     if code_snippet.strip():
+         response = llm_chain.run(input=code_snippet)
+         st.subheader("🧐 Review Feedback:")
+         st.code(response, language="python")
+     else:
+         st.warning("⚠️ Please enter some Python code to review!")
+
+ # Run FastAPI backend (for local testing)
+ if __name__ == "__main__":
+     uvicorn.run(app, host="0.0.0.0", port=8001)
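The `FewShotPromptTemplate` used in main.py joins the prefix, each example rendered through `example_template`, and the suffix with the new input. A stdlib-only sketch of that assembly, using a shortened prefix and a trimmed example for brevity (the join separator mirrors what I understand to be the template's default, which is an assumption worth checking against your LangChain version):

```python
PREFIX = "You are a highly skilled Python code reviewer.\n"  # shortened for the sketch
EXAMPLE_TEMPLATE = "Code: \n{input}\n\nFeedback:\n{output}\n"
SUFFIX = "Code:\n{input}\n\nFeedback:"

examples = [
    {"input": "def add(a, b):\nreturn a + b",
     "output": "Your function 'add' is missing proper indentation."},
]

def build_prompt(code: str) -> str:
    # Prefix, then each formatted example, then the suffix with the new input,
    # joined by blank lines.
    parts = [PREFIX]
    parts += [EXAMPLE_TEMPLATE.format(**ex) for ex in examples]
    parts.append(SUFFIX.format(input=code))
    return "\n\n".join(parts)

print(build_prompt("import nump as np"))
```

This is the full text the chain sends to the model, which is why the suffix ends at "Feedback:": the model's continuation is the review itself.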
requirements.txt ADDED
@@ -0,0 +1,9 @@
+ langchain
+ langchain-community
+ langchain-groq
+ python-dotenv
+ fastapi
+ streamlit
+ langsmith
+ langserve
+ uvicorn