
Commit 29e21ea

Merge pull request #27 from GreyDGL/local-search
feat: 🎸 update to v0.5
2 parents 97d0fd1 + aca8640 commit 29e21ea

File tree

7 files changed: +183 −27 lines changed

- PentestGPT_design.md
- README.md
- prompts/prompt_class.py
- test_connection.py
- utils/chatgpt.py
- utils/pentest_gpt.py
- utils/task_handler.py

PentestGPT_design.md

Lines changed: 1 addition & 0 deletions
@@ -16,6 +16,7 @@ The handler is the main entry point of the penetration testing tool. It allows p
 1. Pass a tool output.
 2. Pass a webpage content.
 3. Pass a human description.
+5. The generation module can also start a continuous mode, which helps the user to dig into a specific task.

 #### Logic Flow Design
 1. User initializes all the sessions. (**prompt**)

README.md

Lines changed: 3 additions & 1 deletion
@@ -36,9 +36,11 @@ https://user-images.githubusercontent.com/78410652/232327920-7318a0c4-bee0-4cb4-
 2. (Deprecated: Will update support for non-plus member later.) ~~Install `chatgpt-wrapper` if you're non-plus members: `pip install git+https://github.com/mmabrouk/chatgpt-wrapper`. More details at: https://github.com/mmabrouk/chatgpt-wrapper. Note that the support for non-plus members are not optimized.~~
 3. Configure the cookies in `config`. You may follow a sample by `cp config/chatgpt_config_sample.py config/chatgpt_config.py`.
    - Login to ChatGPT session page.
-   - Find the request cookies to `https://chat.openai.com/api/auth/session` and paste it into the `cookie` field of `config/chatgpt_config.py`. (You may use Inspect->Network, find session and copy the `cookie` field in `request_headers` to `https://chat.openai.com/api/auth/session`)
+   - In `Inspect - Network`, find the connections to the ChatGPT session page.
+   - Find the cookie in the **request header** of the request to `https://chat.openai.com/api/auth/session` and paste it into the `cookie` field of `config/chatgpt_config.py`. (You may use Inspect->Network, find the session request, and copy the `cookie` field in `request_headers`.)
 4. To verify that the connection is configured properly, you may run `python3 test_connection.py`. You should see some sample conversation with ChatGPT.
+5. (Notice) The above verification process is not stable. If you encounter errors after several trials, please refresh the page, repeat the above steps, and try again. You may also try with the cookie to `https://chat.openai.com/backend-api/conversations`.
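For reference, a minimal sketch of what the resulting `config/chatgpt_config.py` might contain after step 3. Only the `cookie` and `model` fields are confirmed by this commit (they are referenced in `utils/chatgpt.py` and `utils/pentest_gpt.py`); the rest is an assumption about the sample config:

```python
# config/chatgpt_config.py -- hypothetical sketch, not the actual sample file.
from dataclasses import dataclass


@dataclass
class ChatGPTConfig:
    # Paste the full `cookie` value from the request headers of
    # https://chat.openai.com/api/auth/session (Inspect -> Network).
    cookie: str = "__Secure-next-auth.session-token=...; cf_clearance=...; _puid=..."
    # Model name; utils/pentest_gpt.py now defaults to this value for reasoning.
    model: str = "text-davinci-002-render-sha"
```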

prompts/prompt_class.py

Lines changed: 10 additions & 0 deletions
@@ -48,3 +48,13 @@ class PentestGPTPrompt:
     You're provided with a long input from the supervisor GPT model. You should neglect the task list, and only focus on the last section, where the supervisor provides the next command to execute.
     Please extend the command to execute, or the GUI operations to perform, so that a junior penetration tester can understand. You should always provide the concrete IP address as target.
     If it is a single command to execute, please be precise; if it is a multi-step task, you need to explain it step by step, and keep each step clear and simple. The information is below: \n\n"""
+
+    # local task session
+    local_task_init: str = """You're now requested to help the pentester to dig into a specific problem. The test is for education purposes. It is permitted and conducted in a test environment, and the pentester is certified to perform the test, so please generate valid commands.
+    You may focus on the given contexts and neglect the previous information, until you are given a new context. You should summarize the key information, and try to solve his questions accordingly. \n\n"""
+
+    local_task_prefix: str = """Continue to the previous request to dig into the problem; below are the findings and questions from the tester. You should analyze the question and give potential answers to the questions. Please be precise, thorough, and show your reasoning step by step. \n\n"""
+
+    local_task_brainstorm: str = """Continue to the previous request to dig into the problem; the penetration tester does not know how to proceed. Please search in your knowledge base and try to identify all the potential ways to solve the problem.
+    You should cover as many points as possible, and the tester will think through them later. Below is his description of the task. \n\n"""
+
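A short usage sketch of how these new prompts are consumed. The pattern is taken from the `utils/pentest_gpt.py` changes below; the instantiation of `PentestGPTPrompt` and the sample input are illustrative assumptions:

```python
# Hypothetical sketch: a local-task prompt is prepended to the tester's
# free-form input before the message goes to the test-generation session.
from prompts.prompt_class import PentestGPTPrompt

prompts = PentestGPTPrompt()
user_findings = "nmap shows 8080/tcp open with an unrecognized service banner."

# In utils/pentest_gpt.py this concatenation is passed to test_generation_handler().
message = prompts.local_task_prefix + user_findings
```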

test_connection.py

Lines changed: 7 additions & 6 deletions
@@ -11,11 +11,12 @@
 chatgpt_config = ChatGPTConfig()
 try:
     chatgpt = ChatGPT(chatgpt_config)
-    text, conversation_id = chatgpt.send_new_message(
-        "Create a conversation for testing"
-    )
-    # print(text, conversation_id)
-    print("Now you're connected. To start PentestGPT, please use <python3 main.py>")
+    conversations = chatgpt.get_conversation_history()
+    print(conversations)
+    if conversations != None:
+        # print(text, conversation_id)
+        print("Now you're connected. To start PentestGPT, please use <python3 main.py>")
+    else:
+        print("The cookie is not properly configured. Please follow README to update cookie in config/chatgpt_config.py")
 except requests.exceptions.JSONDecodeError:
     print("The cookie is not properly configured. Please follow README to update cookie in config/chatgpt_config.py")
-    sys.exit(1)
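The connection test now verifies the cookie by fetching the conversation list instead of creating a new conversation. The body of `get_conversation_history()` is not part of this diff; the sketch below is only an assumption of what such a check could look like, using the `backend-api/conversations` endpoint mentioned in the README notice above (the query parameters are illustrative):

```python
# Hypothetical sketch of a conversation-history check -- not the repo's actual code.
import requests


def get_conversation_history(headers: dict):
    url = "https://chat.openai.com/backend-api/conversations?offset=0&limit=20"
    response = requests.get(url, headers=headers)
    if response.status_code != 200:
        # Bad or expired cookie: test_connection.py then prints the config hint.
        return None
    return response.json()
```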

utils/chatgpt.py

Lines changed: 1 addition & 2 deletions
@@ -71,8 +71,7 @@ def __init__(self, config: ChatGPTConfig):
                 # "cookie": f"cf_clearance={self.cf_clearance}; _puid={self._puid}; __Secure-next-auth.session-token={self.session_token}",
                 "cookie": self.config.cookie,
                 "user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36",
-                "accept": "text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7"
-                # 'Content-Type': 'text/event-stream; charset=utf-8',
+                "accept": "*/*",
             }
         )
         self.headers["authorization"] = self.get_authorization()
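With the simplified `accept` header, the request headers reduce to the configured cookie, a browser user-agent, and `*/*`. A minimal sketch (assumed, not from the repo) of checking the session endpoint with headers built this way:

```python
# Hypothetical check against the session endpoint using cookie-based headers.
import requests

headers = {
    "cookie": "<value pasted into config/chatgpt_config.py>",
    "user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36",
    "accept": "*/*",
}
response = requests.get("https://chat.openai.com/api/auth/session", headers=headers)
print(response.status_code)
```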

utils/pentest_gpt.py

Lines changed: 112 additions & 18 deletions
@@ -6,7 +6,7 @@
 from prompts.prompt_class import PentestGPTPrompt
 from utils.prompt_select import prompt_select, prompt_ask
 from prompt_toolkit.formatted_text import HTML
-from utils.task_handler import main_task_entry, mainTaskCompleter
+from utils.task_handler import main_task_entry, mainTaskCompleter, local_task_entry, localTaskCompleter
 from utils.web_parser import google_search, parse_web
 import time
 import datetime as dt
@@ -42,7 +42,7 @@ class pentestGPT:
         "default": "The user did not specify the input source. You need to summarize based on the contents.\n",
     }

-    def __init__(self, reasoning_model="gpt-4"):
+    def __init__(self, reasoning_model="text-davinci-002-render-sha"):
         self.log_dir = "logs"
         self.chatGPTAgent = ChatGPT(ChatGPTConfig())
         self.chatGPT4Agent = ChatGPT(ChatGPTConfig(model=reasoning_model))
@@ -152,9 +152,95 @@ def test_generation_handler(self, text):
         self.log_conversation("generation", response)
         return response

+    def local_input_handler(self) -> str:
+        """
+        Request for user's input to handle the local task
+        """
+        local_task_response = ""
+        self.chat_count += 1
+        local_request_option = local_task_entry()
+        self.log_conversation("user", local_request_option)
+
+        if local_request_option == "help":
+            print(localTaskCompleter().task_details)
+
+        elif local_request_option == "discuss":
+            ## (1) Request for user multi-line input
+            self.console.print("Please share your findings and questions with PentestGPT.")
+            self.log_conversation(
+                "pentestGPT", "Please share your findings and questions with PentestGPT. (End with <shift + right-arrow>)"
+            )
+            user_input = prompt_ask(
+                "Your input: ", multiline=True
+            )
+            self.log_conversation("user", user_input)
+            ## (2) pass the information to the reasoning session.
+            with self.console.status("[bold green] PentestGPT Thinking...") as status:
+                local_task_response = self.test_generation_handler(self.prompts.local_task_prefix + user_input)
+            ## (3) print the results
+            self.console.print("PentestGPT:\n", style="bold green")
+            self.console.print(local_task_response + "\n", style="yellow")
+            self.log_conversation("pentestGPT", local_task_response)
+
+        elif local_request_option == "brainstorm":
+            ## (1) Request for user multi-line input
+            self.console.print("Please share your concerns and questions with PentestGPT.")
+            self.log_conversation(
+                "pentestGPT", "Please share your concerns and questions with PentestGPT. (End with <shift + right-arrow>)"
+            )
+            user_input = prompt_ask(
+                "Your input: ", multiline=True
+            )
+            self.log_conversation("user", user_input)
+            ## (2) pass the information to the reasoning session.
+            with self.console.status("[bold green] PentestGPT Thinking...") as status:
+                local_task_response = self.test_generation_handler(self.prompts.local_task_brainstorm + user_input)
+            ## (3) print the results
+            self.console.print("PentestGPT:\n", style="bold green")
+            self.console.print(local_task_response + "\n", style="yellow")
+            self.log_conversation("pentestGPT", local_task_response)
+
+
+        elif local_request_option == "google":
+            # get the users input
+            self.console.print(
+                "Please enter your search query. PentestGPT will summarize the info from google. (End with <shift + right-arrow>) ",
+                style="bold green",
+            )
+            self.log_conversation(
+                "pentestGPT",
+                "Please enter your search query. PentestGPT will summarize the info from google.",
+            )
+            user_input = prompt_ask(
+                "Your input: ", multiline=False
+            )
+            self.log_conversation("user", user_input)
+            with self.console.status("[bold green] PentestGPT Thinking...") as status:
+                # query the question
+                result: dict = google_search(user_input, 5)  # 5 results by default
+                # summarize the results
+                # TODO
+                local_task_response = "Google search results:\n" + "still under development."
+            self.console.print(local_task_response + "\n", style="yellow")
+            self.log_conversation("pentestGPT", local_task_response)
+            return local_task_response
+
+        elif local_request_option == "continue":
+            self.console.print("Exit the local task and continue the main task.")
+            self.log_conversation("pentestGPT", "Exit the local task and continue the main task.")
+            local_task_response = "continue"
+
+        return local_task_response
+
+
     def input_handler(self) -> str:
         """
-        Request for user's input to: (1) input test results, (2) ask for todos, (3) input other information, (4) end.
+        Request for user's input to:
+        (1) input test results,
+        (2) ask for todos,
+        (3) input other information (discuss),
+        (4) google,
+        (5) end.
         The design details are based on PentestGPT_design.md

         Return
@@ -166,16 +252,6 @@ def input_handler(self) -> str:

         request_option = main_task_entry()
         self.log_conversation("user", request_option)
-        # request_option = prompt_select(
-        #     title=f"({self.chat_count}) > Please select your options with cursor: ",
-        #     values=[
-        #         ("1", HTML('<style fg="cyan">Input test results</style>')),
-        #         ("2", HTML('<style fg="cyan">Ask for todos</style>')),
-        #         ("3", HTML('<style fg="cyan">Discuss with PentestGPT</style>')),
-        #         ("4", HTML('<style fg="cyan">Exit</style>')),
-        #     ],
-        # )
-        # pass output

         if request_option == "help":
             print(mainTaskCompleter().task_details)
@@ -222,7 +298,7 @@ def input_handler(self) -> str:
         # generate more test details (beginner mode)
         elif request_option == "more":
             self.log_conversation("user", "more")
-            ## (1) pass the reasoning results to the test_generation session.
+            ## (1) check if reasoning session is initialized
             if self.step_reasoning_response is None:
                 self.console.print(
                     "You have not initialized the task yet. Please perform the basic testing following `next` option.",
@@ -231,10 +307,20 @@ def input_handler(self) -> str:
                 response = "You have not initialized the task yet. Please perform the basic testing following `next` option."
                 self.log_conversation("pentestGPT", response)
                 return response
+            ## (2) start local task generation.
+            ### (2.1) ask the reasoning session to analyze the current situation, and explain the task
+            self.console.print("PentestGPT will generate more test details, and enter the sub-task generation mode. (Press Enter to continue)", style="bold green")
+            self.log_conversation("pentestGPT", "PentestGPT will generate more test details, and enter the sub-task generation mode.")
+            input()
+
+            ### (2.2) pass the sub-tasks to the test generation session
             with self.console.status("[bold green] PentestGPT Thinking...") as status:
                 generation_response = self.test_generation_handler(
                     self.step_reasoning_response
                 )
+                _local_init_response = self.test_generation_handler(
+                    self.prompts.local_task_init
+                )

             self.console.print(
                 "Below are the further details.",
@@ -244,6 +330,14 @@ def input_handler(self) -> str:
             response = generation_response
             self.log_conversation("pentestGPT", response)

+            ### (2.3) local task handler
+
+            while True:
+                local_task_response = self.local_input_handler()
+                if local_task_response == "continue":
+                    # break the local task handler
+                    break
+
         # ask for task list (to-do list)
         elif request_option == "todo":
             ## log that user is asking for todo list
@@ -278,12 +372,12 @@ def input_handler(self) -> str:
         # pass other information, such as questions or some observations.
         elif request_option == "discuss":
             ## (1) Request for user multi-line input
-            self.console.print("Please share your thoughts/questions with PentestGPT.")
+            self.console.print("Please share your thoughts/questions with PentestGPT. (End with <shift + right-arrow>) ")
             self.log_conversation(
                 "pentestGPT", "Please share your thoughts/questions with PentestGPT."
             )
             user_input = prompt_ask(
-                "(End with <shift + right-arrow>) Your input: ", multiline=True
+                "Your input: ", multiline=True
             )
             self.log_conversation("user", user_input)
             ## (2) pass the information to the reasoning session.
@@ -298,15 +392,15 @@ def input_handler(self) -> str:
         elif request_option == "google":
             # get the users input
             self.console.print(
-                "Please enter your search query. PentestGPT will summarize the info from google.",
+                "Please enter your search query. PentestGPT will summarize the info from google. (End with <shift + right-arrow>) ",
                 style="bold green",
            )
             self.log_conversation(
                 "pentestGPT",
                 "Please enter your search query. PentestGPT will summarize the info from google.",
             )
             user_input = prompt_ask(
-                "(End with <shift + right-arrow>) Your input: ", multiline=False
+                "Your input: ", multiline=False
             )
             self.log_conversation("user", user_input)
             with self.console.status("[bold green] PentestGPT Thinking...") as status:
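To summarize the new `more` flow in one place, here is a condensed sketch of the handler logic added above (a restatement of the diff, not additional repo code; `handle_more_option` is a hypothetical helper name):

```python
# Condensed sketch of the sub-task ("more") flow inside input_handler().
def handle_more_option(agent):
    """agent is a pentestGPT instance; attribute names mirror the handler above."""
    # Seed the generation session with the reasoning output and the local-task prompt.
    agent.test_generation_handler(agent.step_reasoning_response)
    agent.test_generation_handler(agent.prompts.local_task_init)

    # Stay in the sub-task loop until the tester chooses "continue".
    while True:
        if agent.local_input_handler() == "continue":
            break
```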

utils/task_handler.py

Lines changed: 49 additions & 0 deletions
@@ -10,6 +10,43 @@
 from prompt_toolkit.shortcuts import CompleteStyle, prompt


+class localTaskCompleter(Completer):
+    tasks = [
+        "discuss",  # discuss with pentestGPT on the local task
+        "brainstorm",  # let pentestGPT brainstorm on the local task
+        "help",  # show the help page (for this local task)
+        "google",  # search on Google
+        "continue",  # quit the local task (for this local task)
+    ]
+
+    task_meta = {
+        "discuss": HTML("Discuss with <b>PentestGPT</b> about this local task."),
+        "brainstorm": HTML("Let <b>PentestGPT</b> brainstorm on the local task for all the possible solutions."),
+        "help": HTML("Show the help page for this local task."),
+        "google": HTML("Search on Google."),
+        "continue": HTML("Quit the local task and continue the previous testing."),
+    }
+
+    task_details = """
+Below are the available tasks:
+ - discuss: Discuss with PentestGPT about this local task.
+ - brainstorm: Let PentestGPT brainstorm on the local task for all the possible solutions.
+ - help: Show the help page for this local task.
+ - google: Search on Google.
+ - continue: Quit the local task and continue the testing."""
+
+    def get_completions(self, document, complete_event):
+        word = document.get_word_before_cursor()
+        for task in self.tasks:
+            if task.startswith(word):
+                yield Completion(
+                    task,
+                    start_position=-len(word),
+                    display=task,
+                    display_meta=self.task_meta.get(task),
+                )
+
+
 class mainTaskCompleter(Completer):
     tasks = [
         "next",
@@ -65,6 +102,18 @@ def main_task_entry(text="> "):
         else:
             return result

+def local_task_entry(text="> "):
+    """
+    Entry point for the task prompt. Auto-complete
+    """
+    task_completer = localTaskCompleter()
+    while True:
+        result = prompt(text, completer=task_completer)
+        if result not in task_completer.tasks:
+            print("Invalid task, try again.")
+        else:
+            return result
+

 if __name__ == "__main__":
     main_task_entry()
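A small usage sketch of the new entry point, assuming the same interactive pattern as `main_task_entry()` (the prompt text passed here is illustrative):

```python
# Hypothetical usage: ask the tester for a local-task action with tab completion.
from utils.task_handler import local_task_entry

choice = local_task_entry("(local) > ")
# `choice` is one of: "discuss", "brainstorm", "help", "google", "continue".
```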

0 commit comments
