
Commit 641df8d

feat: 🎸 Major TUI update
Add TUI multi-line input; optimize prompts; optimize scripts.
1 parent 80386c6 commit 641df8d

6 files changed: +201 additions, −87 deletions


README.md

Lines changed: 11 additions & 6 deletions
@@ -1,5 +1,5 @@
 # PentestGPT
-v0.1, 09/04/2023
+v0.2, 12/04/2023
 
 ## Introduction
 **PentestGPT** is a penetration testing tool empowered by **ChatGPT**. It is designed to automate the penetration testing process. It is built on top of ChatGPT and operates in an interactive mode to guide penetration testers in both overall progress and specific operations.
@@ -15,14 +15,13 @@ The project is still in its early stage. Feel free to raise any issues when usin
 
 
 
-## Examples
+## Usage
 1. To start, run `python3 main.py`.
 2. The tool works similarly to *msfconsole*. Follow the guidance to perform penetration testing.
+3. In general, PentestGPT accepts commands similar to chatGPT.
+  - To enter multi-line input in the terminal, use <Enter> for a new line and <Shift+Right-Arrow> to submit the input.
+  - The selection bar allows you to select from pre-defined options.
 
-## Development
-- [x] Add chunk processing (04/03/2023)
-- [ ] Add prompt optimization
-- [ ] Test scenarios beyond web testing
 
 ## Design Documentation
 The current design is mainly for web penetration testing
@@ -43,8 +42,14 @@ The handler is the main entry point of the penetration testing tool. It allows p
 2. Pass a webpage content.
 3. Pass a human description.
 
+## Update history
+### v0.2
+- A major update to improve the terminal usage.
+- Prompt optimization.
+
 
 ### System Design
+More details in `PentestGPT_design.md`.
 
 #### Logic Flow Design
 1. User initializes all the sessions. (**prompt**)
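
The Usage section above mentions <Enter> for new lines and <Shift+Right-Arrow> to submit, but the binding itself lives in the new `utils/prompt_select.py`, which is not displayed on this page. As a rough sketch only, a prompt_toolkit multi-line prompt can be given such a submit key like this (the helper name `multiline_ask` is illustrative, not from the repo):

```python
# Hypothetical sketch (not from this commit): a prompt_toolkit multi-line
# prompt where <Enter> inserts a newline and <Shift+Right-Arrow> submits.
from prompt_toolkit import PromptSession
from prompt_toolkit.key_binding import KeyBindings


def multiline_ask(message: str = "> ") -> str:
    bindings = KeyBindings()

    @bindings.add("s-right")  # Shift+Right-Arrow accepts the buffer
    def _(event):
        event.current_buffer.validate_and_handle()

    session = PromptSession(key_bindings=bindings)
    # multiline=True makes <Enter> insert a newline instead of submitting
    return session.prompt(message, multiline=True)


if __name__ == "__main__":
    print(multiline_ask("> "))
```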

prompts/prompt_class.py

Lines changed: 6 additions & 5 deletions
@@ -19,7 +19,7 @@ class PentestGPTPrompt:
 1. Decide to remove some tasks if they are considered as completed.
 2. Decide to add a new task if there's something interesting.
 3. Give scores to each subtasks, showing if it can lead to a potential vulnerability.
-4. Recommend what to do next based on the scores when asked about to-do, listed in sequence as 1, 2, 3, ...
+4. Add a dividing line (--------). Then recommend what to do next based on the scores when asked about to-do, listed in sequence as 1, 2, 3, ...
 Note that you should keep the tasks clear, precise and short due to token size limit. You should remember to remove redundant/outdated tasks due to the same reason.
 I'll provide the general penetration test information to you shortly."""
     input_parsing_init: str = """I want you to be a help penetration testers for penetration testing by summarizing the contents from the web pages and security tools outputs. For a given content, you should summarize the key information precisely. In particular,
@@ -36,14 +36,15 @@ class PentestGPTPrompt:
 You should provide it in a way as if you're asking another penetration tester to execute it. You should always provide the concrete IP address as target"""
 
     process_results: str = """Here's the test summary from the penetration tester. Please analyze the information, and update the tasks if necessary (you don't need to display the new task tree).
-After this, please give one task for the tester to do next.\n"""
+After this, please give one task for the tester to do next.\n\n"""
 
     ask_todo: str = """Please think about the previous information step by step, and analyze the information.
 Then, please list the most possible sub-tasks (no more than 2) that you think we should proceed to work on next."""
 
-    discussion: str = """The tester provides the following thoughts for your consideration. Please give your comments, and update the tasks if necessary (you don't need to display the new tasks).\n"""
+    discussion: str = """The tester provides the following thoughts for your consideration. Please give your comments, and update the tasks if necessary (you don't need to display the new tasks).\n\n"""
 
     # generation session
     todo_to_command: str = """You're asked to explain the following tasks to a junior penetration tester.
-Please provide the command to execute, or the GUI operations to perform. You should always provide the concrete IP address as target.
-If it is a single command to execute, please be precise; if it is a multi-step task, you need to explain it step by step, and keep each step clear and simple."""
+You're provided with a long input from the supervisor GPT model. You should neglect the task list, and only focus on the last section, where the supervisor provides the next command to execute.
+Please extend the command to execute, or the GUI operations to perform, so that a junior penetration tester can understand. You should always provide the concrete IP address as target.
+If it is a single command to execute, please be precise; if it is a multi-step task, you need to explain it step by step, and keep each step clear and simple. The information is below: \n\n"""

requirements.txt

Lines changed: 2 additions & 1 deletion
@@ -7,4 +7,5 @@ requests
 loguru
 beautifulsoup4~=4.11.2
 colorama
-rich
+rich
+prompt-toolkit

utils/chatgpt.py

Lines changed: 4 additions & 2 deletions
@@ -101,7 +101,9 @@ def _parse_message_raw_output(self, response: requests.Response):
         result = json.loads(last_line[5:])
         return result
 
-    def send_new_message(self, message):
+    def send_new_message(self, message, model=None):
+        if model is None:
+            model = self.model
         # Send a message in a new conversation window and return the conversation id
         logger.info(f"send_new_message")
         url = "https://chat.openai.com/backend-api/conversation"
@@ -116,7 +118,7 @@ def send_new_message(self, message):
                 }
             ],
             "parent_message_id": str(uuid1()),
-            "model": self.model,
+            "model": model,
         }
         start_time = time.time()
         message: Message = Message()
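
The new optional `model` parameter lets a caller start a conversation with a different model than the agent's configured default; `utils/pentest_gpt.py` uses this together with a dedicated GPT-4 agent for the reasoning session. A minimal usage sketch, assuming only what this diff shows (the `ChatGPTConfig(model=...)` constructor and the `(text, conversation_id)` return shape) and a properly configured ChatGPT session:

```python
# Hedged sketch: exercising the per-call model override added in this commit.
# Assumes valid session credentials are present in ChatGPTConfig.
from config.chatgpt_config import ChatGPTConfig
from utils.chatgpt import ChatGPT

default_agent = ChatGPT(ChatGPTConfig())            # uses the configured default model
gpt4_agent = ChatGPT(ChatGPTConfig(model="gpt-4"))  # dedicated GPT-4 agent

# send_new_message() falls back to self.model when model is None ...
text, conversation_id = default_agent.send_new_message(
    "Summarize this tool output."
)

# ... or the caller can override the model for a single new conversation.
text, conversation_id = default_agent.send_new_message(
    "Plan the next penetration testing step.", model="gpt-4"
)
```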

utils/pentest_gpt.py

Lines changed: 75 additions & 73 deletions
@@ -2,9 +2,10 @@
 from config.chatgpt_config import ChatGPTConfig
 from rich.spinner import Spinner
 from utils.chatgpt import ChatGPT
-from rich.prompt import Prompt
 from rich.console import Console
 from prompts.prompt_class import PentestGPTPrompt
+from utils.prompt_select import prompt_select, prompt_ask
+from prompt_toolkit.formatted_text import HTML
 
 import loguru
 import time, os, textwrap
@@ -13,16 +14,33 @@
 logger.add(sink="logs/pentest_gpt.log")
 
 
+def prompt_continuation(width, line_number, wrap_count):
+    """
+    The continuation: display line numbers and '->' before soft wraps.
+    Notice that we can return any kind of formatted text from here.
+    The prompt continuation doesn't have to be the same width as the prompt
+    which is displayed before the first line, but in this example we choose to
+    align them. The `width` input that we receive here represents the width of
+    the prompt.
+    """
+    if wrap_count > 0:
+        return " " * (width - 3) + "-> "
+    else:
+        text = ("- %i - " % (line_number + 1)).rjust(width)
+        return HTML("<strong>%s</strong>") % text
+
+
 class pentestGPT:
     postfix_options = {
-        "default": "The user did not specify the input source. You need to summarize based on the contents.\n",
-        "user-comments": "The input content is from user comments.\n",
         "tool": "The input content is from a security testing tool. You need to list down all the points that are interesting to you; you should summarize it as if you are reporting to a senior penetration tester for further guidance.\n",
+        "user-comments": "The input content is from user comments.\n",
         "web": "The input content is from web pages. You need to summarize the readable-contents, and list down all the points that can be interesting for penetration testing.\n",
+        "default": "The user did not specify the input source. You need to summarize based on the contents.\n",
     }
 
     def __init__(self):
         self.chatGPTAgent = ChatGPT(ChatGPTConfig())
+        self.chatGPT4Agent = ChatGPT(ChatGPTConfig(model="gpt-4"))
         self.prompts = PentestGPTPrompt
        self.console = Console()
        self.spinner = Spinner("line", "Processing")
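
The commit adds `prompt_continuation` (the pattern from prompt_toolkit's multiline example), but the prompt call that consumes it lives in `utils/prompt_select.py`, which this page does not display. As an illustration only, prompt_toolkit accepts such a callable through the `prompt_continuation=` argument:

```python
# Illustrative only: how a continuation callable like the one above is
# typically wired into prompt_toolkit; the actual wiring is in
# utils/prompt_select.py, which this commit does not show.
from prompt_toolkit import prompt
from prompt_toolkit.formatted_text import HTML


def prompt_continuation(width, line_number, wrap_count):
    # '-> ' before soft wraps, '- N - ' before hard newlines
    if wrap_count > 0:
        return " " * (width - 3) + "-> "
    text = ("- %i - " % (line_number + 1)).rjust(width)
    return HTML("<strong>%s</strong>") % text


answer = prompt(
    "multiline input> ",
    multiline=True,                           # Enter inserts a newline
    prompt_continuation=prompt_continuation,  # rendered before lines 2..n
)
print("You said: %s" % answer)
```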
@@ -41,12 +59,12 @@ def initialize(self):
                 text_0,
                 self.test_generation_session_id,
             ) = self.chatGPTAgent.send_new_message(
-                self.prompts.generation_session_init
+                self.prompts.generation_session_init,
             )
             (
                 text_1,
                 self.test_reasoning_session_id,
-            ) = self.chatGPTAgent.send_new_message(
+            ) = self.chatGPT4Agent.send_new_message(
                 self.prompts.reasoning_session_init
             )
             (
@@ -55,43 +73,14 @@ def initialize(self):
             ) = self.chatGPTAgent.send_new_message(self.prompts.input_parsing_init)
         except Exception as e:
             logger.error(e)
-
-    def _ask(self, text="> ", multiline=True) -> str:
-        """
-        A handler for Prompt.ask. It can intake multiple lines. Ideally for tool outputs and web contents
-
-        Parameters
-        ----------
-        text : str, optional
-            The prompt text, by default "> "
-        multiline : bool, optional
-            Whether to allow multiline input, by default True
-
-        Returns
-        -------
-        str
-            The user input
-        """
-        if not multiline:
-            return self.console.input(text)
-        response = [self.console.input(text)]
-        while True:
-            try:
-                user_input = self.console.input("")
-                response.append(user_input)
-            except EOFError:
-                break
-            except KeyboardInterrupt:
-                break
-        response = "\n".join(response)
-        return response
+        self.console.print("- ChatGPT Sessions Initialized.", style="bold green")
 
     def reasoning_handler(self, text) -> str:
         # summarize the contents if necessary.
         if len(text) > 8000:
             text = self.input_parsing_handler(text)
         # pass the information to reasoning_handler and obtain the results
-        response = self.chatGPTAgent.send_message(
+        response = self.chatGPT4Agent.send_message(
             self.prompts.process_results + text, self.test_reasoning_session_id
         )
         return response
@@ -113,7 +102,7 @@ def input_parsing_handler(self, text, source=None) -> str:
         for wrapped_input in wrapped_inputs:
             word_limit = f"Please ensure that the input is less than {8000 / len(wrapped_inputs)} words.\n"
             summarized_content += self.chatGPTAgent.send_message(
-                prefix + word_limit + text, self.input_parsing_session_id
+                prefix + word_limit + wrapped_input, self.input_parsing_session_id
             )
         return summarized_content
 
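
The fix in `input_parsing_handler` matters because the surrounding loop (not fully shown in this hunk) splits long inputs into chunks; before this change every iteration re-sent the whole `text` instead of the current chunk. A hedged sketch of that chunk-and-summarize pattern is below; the 8000-character chunk size and the standalone helper are illustrative assumptions, not taken from the repo:

```python
# Illustrative chunking sketch; the real wrapped_inputs construction is not
# visible in this diff, so the chunk size here is an assumption.
import textwrap


def summarize_in_chunks(send_message, text: str, prefix: str = "") -> str:
    # Split the long input into fixed-size chunks.
    wrapped_inputs = textwrap.wrap(text, 8000)
    summarized_content = ""
    for wrapped_input in wrapped_inputs:
        word_limit = (
            f"Please ensure that the input is less than "
            f"{8000 / len(wrapped_inputs)} words.\n"
        )
        # Key point of the fix: send the current chunk, not the whole text.
        summarized_content += send_message(prefix + word_limit + wrapped_input)
    return summarized_content
```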

@@ -133,35 +122,39 @@ def input_handler(self) -> str:
         response: str
             The response from the chatGPT model.
         """
-        request_option = Prompt.ask(
-            "> How can I help? 1)Input results 2)Todos, 3)Other info, 4)End",
-            choices=["1", "2", "3", "4"],
-            default="1",
+        request_option = prompt_select(
+            title="> Please select your options with cursor: ",
+            values=[
+                ("1", HTML('<style fg="cyan">Input test results</style>')),
+                ("2", HTML('<style fg="cyan">Ask for todos</style>')),
+                ("3", HTML('<style fg="cyan">Discuss with PentestGPT</style>')),
+                ("4", HTML('<style fg="cyan">Exit</style>')),
+            ],
         )
         # pass output
         if request_option == "1":
             ## (1) pass the information to input_parsing session.
-            self.console.print(
-                "Please describe your findings briefly, followed by the codes/outputs. End with EOF."
-            )
             ## Give a option list for user to choose from
             options = list(self.postfix_options.keys())
-            options_str = "\n".join(
-                [f"{i+1}) {option}" for i, option in enumerate(options)]
+            value_list = [
+                (i, HTML(f'<style fg="cyan">{options[i]}</style>'))
+                for i in range(len(options))
+            ]
+            source = prompt_select(
+                title="Please choose the source of the information.", values=value_list
             )
-            source = Prompt.ask(
-                f"Please choose the source of the information. \n{options_str}",
-                choices=list(str(x) for x in range(1, len(options) + 1)),
-                default=1,
-            )
-            user_input = self._ask("> ", multiline=True)
-            parsed_input = self.input_parsing_handler(
-                user_input, source=options[int(source) - 1]
+            self.console.print(
+                "Your input: (End with <shift + right-arrow>)", style="bold green"
             )
-            ## (2) pass the summarized information to the reasoning session.
-            reasoning_response = self.reasoning_handler(parsed_input)
-            ## (3) pass the reasoning results to the test_generation session.
-            generation_response = self.test_generation_handler(reasoning_response)
+            user_input = prompt_ask("> ", multiline=True)
+            with self.console.status("[bold green] PentestGPT Thinking...") as status:
+                parsed_input = self.input_parsing_handler(
+                    user_input, source=options[int(source)]
+                )
+                ## (2) pass the summarized information to the reasoning session.
+                reasoning_response = self.reasoning_handler(parsed_input)
+                ## (3) pass the reasoning results to the test_generation session.
+                generation_response = self.test_generation_handler(reasoning_response)
             ## (4) print the results
             self.console.print(
                 "Based on the analysis, the following tasks are recommended:",
@@ -178,11 +171,12 @@ def input_handler(self) -> str:
         # ask for sub tasks
         elif request_option == "2":
             ## (1) ask the reasoning session to analyze the current situation, and list the top sub-tasks
-            reasoning_response = self.reasoning_handler(self.prompts.ask_todo)
-            ## (2) pass the sub-tasks to the test_generation session.
-            message = self.prompts.todo_to_command + "\n" + reasoning_response
-            generation_response = self.test_generation_handler(message)
-            ## (3) print the results
+            with self.console.status("[bold green] PentestGPT Thinking...") as status:
+                reasoning_response = self.reasoning_handler(self.prompts.ask_todo)
+                ## (2) pass the sub-tasks to the test_generation session.
+                message = self.prompts.todo_to_command + "\n" + reasoning_response
+                generation_response = self.test_generation_handler(message)
+            ## (3) print the results
             self.console.print(
                 "Based on the analysis, the following tasks are recommended:",
                 style="bold green",
@@ -198,10 +192,13 @@ def input_handler(self) -> str:
         # pass other information, such as questions or some observations.
         elif request_option == "3":
             ## (1) Request for user multi-line input
-            self.console.print("Please input your information. End with EOF.")
-            user_input = self._ask("> ", multiline=True)
+            self.console.print("Please share your thoughts/questions with PentestGPT.")
+            user_input = prompt_ask(
+                "(End with <shift + right-arrow>) Your input: ", multiline=True
+            )
             ## (2) pass the information to the reasoning session.
-            response = self.reasoning_handler(self.prompts.discussion + user_input)
+            with self.console.status("[bold green] PentestGPT Thinking...") as status:
+                response = self.reasoning_handler(self.prompts.discussion + user_input)
             ## (3) print the results
             self.console.print("PentestGPT:\n", style="bold green")
             self.console.print(response + "\n", style="yellow")
@@ -211,6 +208,10 @@ def input_handler(self) -> str:
             response = False
             self.console.print("Thank you for using PentestGPT!", style="bold green")
 
+        else:
+            self.console.print("Please key in the correct options.", style="bold red")
+            response = self.input_handler()
+
         return response
 
     def main(self):
@@ -221,26 +222,27 @@ def main(self):
         self.initialize()
 
         # 1. User firstly provide basic information of the task
-        init_description = Prompt.ask(
-            "Please describe the penetration testing task in one line, including the target IP, task type, etc."
+        init_description = prompt_ask(
+            "Please describe the penetration testing task in one line, including the target IP, task type, etc.\n> ",
+            multiline=False,
         )
         ## Provide the information to the reasoning session for the task initialization.
-        init_description = self.prompts.task_description + init_description
+        prefixed_init_description = self.prompts.task_description + init_description
         with self.console.status(
             "[bold green] Generating Task Information..."
         ) as status:
-            _response = self.reasoning_handler(init_description)
+            _response = self.reasoning_handler(prefixed_init_description)
+        self.console.print("- Task information generated. \n", style="bold green")
         # 2. Reasoning session generates the first thing to do and provide the information to the generation session
         with self.console.status("[bold green]Processing...") as status:
-            first_todo = self.reasoning_handler(self.prompts.first_todo)
             first_generation_response = self.test_generation_handler(
-                self.prompts.todo_to_command + first_todo
+                self.prompts.todo_to_command + self.prompts.first_todo
             )
         # 3. Show user the first thing to do.
         self.console.print(
             "PentestGPT suggests you to do the following: ", style="bold green"
         )
-        self.console.print(first_todo)
+        self.console.print(_response)
         self.console.print("You may start with:", style="bold green")
         self.console.print(first_generation_response)