
Conversation

@PriteshPatel3

To allow the user to choose any locally running model through Ollama


@gemini-code-assist gemini-code-assist bot left a comment


Summary of Changes

Hello @PriteshPatel3, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request enhances the command-line interface by enabling users to specify and utilize any locally running Ollama model. This provides greater flexibility for developers and users who wish to experiment with or integrate different large language models beyond the pre-defined options, directly from the CLI.

Highlights

  • Custom Ollama Model Selection: Introduced a "Custom (enter model name)" option for Ollama models in both the shallow and deep thinking agent selection menus.
  • Interactive Model Input: Implemented a prompt that asks the user to enter their desired Ollama model name when the "Custom" option is chosen.
  • Input Validation: Added validation to ensure that a non-empty model name is provided when the custom option is selected, preventing empty inputs.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e., a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | /gemini review | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state. |
| Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | /gemini help | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the root of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its suggestions are incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a valuable feature allowing users to specify their own local Ollama models. The implementation is clear and directly addresses the goal. My review focuses on two main points to enhance the code quality. Firstly, there's an opportunity to reduce code duplication by creating a helper function for the custom model selection logic, which is currently repeated in two places. Secondly, there's a more critical structural issue regarding the use of the console object, which is not defined within the scope of cli/utils.py and could lead to runtime errors. Addressing these points will make the code more maintainable and robust.

        validate=lambda x: len(x.strip()) > 0 or "Please enter a valid model name.",
    ).ask()
    if not custom_model:
        console.print("\n[red]No model name provided. Exiting...[/red]")


high

The console object is used here but it is not defined or imported in cli/utils.py. It appears to be defined as a global in cli/main.py, which will likely cause a NameError at runtime because this module doesn't have access to main.py's globals. This is a critical correctness issue. This anti-pattern is also present in the new code in select_deep_thinking_agent (line 260). The console object should be made available to this module properly, for instance by defining it in a shared location and importing it where needed.
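
One way to address this, sketched here purely as an illustration: keep a single Console instance in a small shared module (the cli/shared.py path below is hypothetical) and import it wherever it is needed.

```python
# cli/shared.py (hypothetical location) -- one Console instance for the whole CLI,
# so cli/main.py and cli/utils.py no longer depend on an implicit global.
from rich.console import Console

console = Console()
```

cli/utils.py (and cli/main.py) would then do `from cli.shared import console`, so the error-handling branches in both selection functions resolve the name locally instead of assuming access to main.py's globals.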

"ollama": [
("llama3.1 local", "llama3.1"),
("llama3.2 local", "llama3.2"),
("Custom (enter model name)", "__custom__"),


medium

The string "__custom__" is a magic value used for the choice's value and for comparison later. To improve readability and maintainability, it's best to define this as a constant at the module level (e.g., CUSTOM_OLLAMA_MODEL_CHOICE = "__custom__") and use that constant in both places. This would also apply to its usage for the deep thinking agent.

Comment on lines +181 to +190
# If custom is selected, prompt for the model name to use with Ollama
if choice == "__custom__":
    custom_model = questionary.text(
        "Enter your Ollama model name (e.g., mistral-nemo:latest):",
        validate=lambda x: len(x.strip()) > 0 or "Please enter a valid model name.",
    ).ask()
    if not custom_model:
        console.print("\n[red]No model name provided. Exiting...[/red]")
        exit(1)
    return custom_model.strip()


medium

This block of code for handling a custom model selection is nearly identical to the one in select_deep_thinking_agent (lines 253-262). To improve maintainability and adhere to the DRY (Don't Repeat Yourself) principle, this logic should be extracted into a private helper function. The example model name could be passed as an argument to this new function.
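
A possible shape for that helper, sketched under the assumption that questionary and a shared rich Console are available in cli/utils.py; the function name and its example-model parameter are illustrative, while the prompt text and validation logic come from the diff above.

```python
import sys

import questionary
from rich.console import Console

console = Console()  # or the shared instance suggested in the earlier comment


def _prompt_custom_ollama_model(example_model: str = "mistral-nemo:latest") -> str:
    """Ask for an Ollama model name, exiting if the user provides none."""
    custom_model = questionary.text(
        f"Enter your Ollama model name (e.g., {example_model}):",
        validate=lambda x: len(x.strip()) > 0 or "Please enter a valid model name.",
    ).ask()
    if not custom_model:
        console.print("\n[red]No model name provided. Exiting...[/red]")
        sys.exit(1)
    return custom_model.strip()
```

select_shallow_thinking_agent and select_deep_thinking_agent would then each call this helper when the custom choice is selected, passing a different example model name if desired.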
