feat: add support for field resolvers #1323
Conversation
… ludwig/eng-6993-implement-field-resolver-for-graphql-operation
Walkthrough

Refactors the gRPC datasource to support field resolvers and dependency-driven execution: adds Path helpers, generic RPCConfigMap and ResolveRPC types, a node-based compiler, ServiceCall, CallKind, a DependencyGraph with topological resolution, graph-driven DataSource load/merging, extended proto/schema and mock service resolvers, and many test/tooling updates.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant DataSource
    participant DependencyGraph
    participant Resolver as Resolver (caller)
    participant GRPC as gRPC Service
    Note over DataSource,DependencyGraph: Build graph from RPCExecutionPlan
    DataSource->>DependencyGraph: NewDependencyGraph(plan)
    DataSource->>DependencyGraph: TopologicalSortResolve(resolver)
    alt per-level resolution
        DependencyGraph->>Resolver: provide []FetchItem (level chunk)
        Resolver->>DataSource: invoke ResolveChunk(fetches)
        DataSource->>GRPC: concurrent ServiceCall(s)
        GRPC-->>DataSource: responses
        DataSource->>DependencyGraph: SetFetchData(index, serviceCall)
    end
    Note over DataSource: After all levels resolved
    DataSource->>DataSource: merge results (by path or value)
    DataSource-->>Caller: final merged response
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~75 minutes

Possibly related PRs

Pre-merge checks and finishing touches: ✅ Passed checks (3 passed)

✨ Finishing touches

🧪 Generate unit tests (beta)
Actionable comments posted: 14
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (6)
v2/pkg/grpctest/mockservice.go (1)

**946-959: Nil guard for filter to prevent panic**

`filter` is used without a nil check; a nil filter will panic when accessing fields.

Apply:

```diff
 func (s *MockService) QueryTypeWithMultipleFilterFields(ctx context.Context, in *productv1.QueryTypeWithMultipleFilterFieldsRequest) (*productv1.QueryTypeWithMultipleFilterFieldsResponse, error) {
-	filter := in.GetFilter()
+	filter := in.GetFilter()
+	if filter == nil {
+		return &productv1.QueryTypeWithMultipleFilterFieldsResponse{
+			TypeWithMultipleFilterFields: []*productv1.TypeWithMultipleFilterFields{},
+		}, nil
+	}
 	var fields []*productv1.TypeWithMultipleFilterFields
```

v2/pkg/grpctest/schema.go (2)
**44-47: Bug: wrapping non-error with %w**

`operationreport.Report` is not an `error`; `%w` requires an error operand (`go vet` flags this and the wrapped value cannot be unwrapped). Wrap `report.Error()` instead.

```diff
-	if report.HasErrors() {
-		return ast.Document{}, fmt.Errorf("failed to parse schema: %w", report)
-	}
+	if report.HasErrors() {
+		return ast.Document{}, fmt.Errorf("failed to parse schema: %w", report.Error())
+	}
```
**53-56: Bug: wrapping non-error with %w**

Same issue during validation. Use `report.Error()`.

```diff
-	astvalidation.DefaultDefinitionValidator().Validate(&doc, &report)
-	if report.HasErrors() {
-		return ast.Document{}, fmt.Errorf("failed to validate schema: %w", report)
-	}
+	astvalidation.DefaultDefinitionValidator().Validate(&doc, &report)
+	if report.HasErrors() {
+		return ast.Document{}, fmt.Errorf("failed to validate schema: %w", report.Error())
+	}
```

v2/pkg/engine/datasource/grpc_datasource/execution_plan.go (1)
**267-275: Bug: returning pointer to range copy in RPCFields.ByName**

`&field` points to a copy, not the slice element. Callers mutating it won't affect the original.

```diff
 func (r RPCFields) ByName(name string) *RPCField {
-	for _, field := range r {
-		if field.Name == name {
-			return &field
-		}
-	}
+	for i := range r {
+		if r[i].Name == name {
+			return &r[i]
+		}
+	}
 	return nil
 }
```

v2/pkg/engine/datasource/grpc_datasource/grpc_datasource_test.go (1)
**223-231: Malformed request payload: double-wrapped "variables"**

`variables` already includes `{"variables": ...}`. You're adding another `"variables"` key, producing `{"variables":{"variables":...}}`. Align with other tests and pass "body" containing variables, or pass only the inner object.

```diff
-	output := new(bytes.Buffer)
-	err = ds.Load(context.Background(), []byte(`{"query":"`+query+`","variables":`+variables+`}`), output)
+	output := new(bytes.Buffer)
+	// Option A: use body wrapper
+	err = ds.Load(context.Background(), []byte(`{"query":"`+query+`","body":`+variables+`}`), output)
+	// Option B: if Load expects top-level "variables", change variables to the inner object and use that:
+	// vars := `{"filter":{"name":"test","filterField1":"test","filterField2":"test"}}`
+	// err = ds.Load(context.Background(), []byte(`{"query":"`+query+`","variables":`+vars+`}`), output)
```

v2/pkg/engine/datasource/grpc_datasource/compiler.go (1)
**1156-1174: Fix: Service.MethodsRefs indexes are wrong (need global offset)**

`methodsRefs` currently use local indices [0..n), but `resolveServiceName` indexes `doc.Methods` by these refs. This yields wrong lookups/panics when multiple services exist.

```diff
 func (p *RPCCompiler) parseService(s protoref.ServiceDescriptor) Service {
 	name := string(s.Name())
 	m := s.Methods()
 	methods := make([]Method, 0, m.Len())
 	methodsRefs := make([]int, 0, m.Len())
-	for j := 0; j < m.Len(); j++ {
-		methods = append(methods, p.parseMethod(m.Get(j)))
-		methodsRefs = append(methodsRefs, j)
-	}
+	// compute base index before appending
+	offset := len(p.doc.Methods)
+	for j := 0; j < m.Len(); j++ {
+		methods = append(methods, p.parseMethod(m.Get(j)))
+		methodsRefs = append(methodsRefs, offset+j)
+	}
 	// Add the methods to the Document
 	p.doc.Methods = append(p.doc.Methods, methods...)
 	return Service{
 		Name:        name,
 		FullName:    string(s.FullName()),
 		MethodsRefs: methodsRefs,
 	}
 }
```
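The global-offset pattern suggested above can be shown in isolation. `Document`, `Service`, and `Method` below are hypothetical stand-ins for the compiler's structs, not the PR's actual types:

```go
package main

import "fmt"

type Method struct{ Name string }

type Service struct {
	Name        string
	MethodsRefs []int // indices into the shared doc.Methods slice
}

type Document struct {
	Methods  []Method
	Services []Service
}

// addService appends methods to the shared slice and records
// globally valid refs by offsetting with the pre-append length.
func (d *Document) addService(name string, methods ...Method) {
	offset := len(d.Methods) // base index before appending
	refs := make([]int, 0, len(methods))
	for i := range methods {
		refs = append(refs, offset+i)
	}
	d.Methods = append(d.Methods, methods...)
	d.Services = append(d.Services, Service{Name: name, MethodsRefs: refs})
}

func main() {
	var doc Document
	doc.addService("UserService", Method{"GetUser"}, Method{"ListUsers"})
	doc.addService("ProductService", Method{"GetProduct"})

	// With local indices the second service would wrongly point at
	// doc.Methods[0]; with the offset it resolves correctly.
	ref := doc.Services[1].MethodsRefs[0]
	fmt.Println(ref, doc.Methods[ref].Name) // 2 GetProduct
}
```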
🧹 Nitpick comments (40)
v2/pkg/grpctest/Readme.md (5)
**68-84: Add a language to the fenced code block (markdownlint MD040).**

Specify a language for the file tree block (use `text`) to satisfy linters.

````diff
-```
+```text
 grpctest/
 ├── Readme.md          # This documentation
 ├── Makefile           # Build and generation commands
 ├── mockservice.go     # Main mock service implementation
 ├── schema.go          # Schema loading utilities
 ├── product.proto      # Protocol buffer definitions
 ├── mapping/
 │   └── mapping.go     # GraphQL-to-gRPC field mappings
 ├── productv1/         # Generated protobuf Go code
 │   ├── product.pb.go
 │   └── product_grpc.pb.go
 ├── testdata/          # Test schemas and data
 │   ├── products.graphqls  # GraphQL schema
 ├── plugin/            # Plugin service implementation
 └── cmd/               # Command-line utilities
 ```
````
**142-145: Grammar/terminology: "Set up" and "VS Code".**

Prefer "Set up a Run Configuration in VS Code".

```diff
-### Setup a Run Configuration in VSCode
+### Set up a Run Configuration in VS Code
```
**59-61: Clarify "cosmo" prerequisite or link to it.**

Readers won't know what "cosmo" is. Add one line explaining what it is, why it's needed, and where to get it (repo/path).
**48-66: Add a "Prerequisites" subsection (tools and versions).**

List required tools to run the make targets (e.g., Go toolchain version, protoc, buf/protocompile or plugins, wgc CLI, any local "cosmo" checkout), and ideally pin minimal versions for reproducibility.

Proposed snippet to insert above "Code Generation":

```markdown
### Prerequisites

- Go: >= 1.xx
- protoc: >= 3/4.x with Go plugins
- buf / protocompile (if used by the Makefile targets)
- wgc CLI: <version> (for regenerate-proto / generate-all)
- Optional: local "cosmo" checkout (see below) for regenerate-proto-local
```
**1-1: Rename Readme.md to README.md**

Rename Readme.md to README.md in v2/pkg/grpctest to match the existing convention and ensure consistent GitHub rendering.

v2/pkg/grpctest/product.proto (6)
**851-851: Reserve skipped field number in Category for wire compatibility**

You added subcategories as field 5, leaving 4 unused. If 4 was ever used, reserve it to avoid accidental reuse.

Apply inside message Category:

```diff
 message Category {
   string id = 1;
   string name = 2;
   CategoryKind kind = 3;
+  reserved 4;
   ListOfSubcategory subcategories = 5;
 }
```
**660-660: Align max_price type with Product.price and allow optionality**

price is a double; using int32 for max_price can truncate and surprise callers. Prefer DoubleValue for an optional argument.

```diff
-message ResolveProductRecommendedCategoryArgs {
-  int32 max_price = 1;
-}
+message ResolveProductRecommendedCategoryArgs {
+  google.protobuf.DoubleValue max_price = 1;
+}
```
**1049-1055: Use google.protobuf.Timestamp for timestamp fields**

String timestamps lose type safety and time semantics. Prefer Timestamp.

```diff
 message CategoryMetrics {
   string id = 1;
   string metric_type = 2;
   double value = 3;
-  string timestamp = 4;
+  google.protobuf.Timestamp timestamp = 4;
   string category_id = 5;
 }
```

Also add the import at the top:

```diff
 import "google/protobuf/wrappers.proto";
+import "google/protobuf/timestamp.proto";
```
**635-657: Document 1:1 ordering between context and result for batched resolvers**

Each resolver uses a repeated context and a parallel result list. Add comments clarifying that result length equals context length and that ordering is 1:1 to avoid ambiguity.

Example for one pair:

```diff
 message ResolveProductRecommendedCategoryRequest {
-  // context provides the resolver context for the field recommendedCategory of type Product.
+  // context provides the resolver context for the field recommendedCategory of type Product.
+  // Order matters: the N-th result in the response corresponds to the N-th context here.
   repeated ResolveProductRecommendedCategoryContext context = 1;
   // field_args provides the arguments for the resolver field recommendedCategory of type Product.
   ResolveProductRecommendedCategoryArgs field_args = 2;
 }

 message ResolveProductRecommendedCategoryResponse {
-  repeated ResolveProductRecommendedCategoryResult result = 1;
+  // Must contain exactly one entry per input context, in the same order.
+  repeated ResolveProductRecommendedCategoryResult result = 1;
 }
```

Please replicate for the other resolver request/response pairs.

Also applies to: 669-682, 693-706, 716-729, 740-753, 763-776
**1109-1114: Enum naming: EXPRESS doesn't describe a destination**

EXPRESS denotes speed, not destination. Consider either:

- Rename to SHIPPING_DESTINATION_PRIORITY (or similar), or
- Split speed into a separate enum/field (e.g., ShippingSpeed) and keep destination as DOMESTIC/INTERNATIONAL.
**731-753: Consider an enum for metric_type for type safety**

metric_type is a free-form string. If the set is finite, define an enum (e.g., CategoryMetricType) to prevent invalid values and enable better codegen ergonomics.
v2/pkg/grpctest/Makefile (5)
**1-1: Set shell with strict flags; use mkfile_dir consistently.**

Add bash + pipefail for safer recipes and use mkfile_dir for paths to avoid CWD issues.

```diff
 mkfile_dir := $(shell dirname $(abspath $(lastword $(MAKEFILE_LIST))))
+
+# Ensure predictable shell behavior
+SHELL := bash
+.SHELLFLAGS := -eu -o pipefail -c
```
**24-29: Ensure mapping directory exists; small robustness tweak.**

If mapping/ does not exist, go run may fail to create mapping.go.

```diff
 generate-mapping-code: regenerate-proto
-	go run $(mkfile_dir)/cmd/mapping_helper/main.go $(mkfile_dir)/testdata/mapping.json $(mkfile_dir)/mapping/mapping.go
+	mkdir -p $(mkfile_dir)/mapping
+	go run $(mkfile_dir)/cmd/mapping_helper/main.go $(mkfile_dir)/testdata/mapping.json $(mkfile_dir)/mapping/mapping.go
 	go fmt $(mkfile_dir)/mapping/mapping.go
 	rm -f $(mkfile_dir)/testdata/mapping.json
```
**31-34: Local CLI path and env coupling; pin versions and guard existence.**

The tsx path and .env outside this repo can break CI. Suggest:
- Parameterize the CLI path via a variable (e.g., COSMO_CLI?=../../../../cosmo/cli/src/index.ts).
- Pin tsx and CLI versions (avoid floating latest downstream).
- Guard with a helpful error if the path/env is missing.
Would you confirm:
- The relative path resolves in CI?
- The .env file is available or optional for generation?
**35-39: Mirror robustness for local mapping; avoid duplication later.**

- Create the mapping dir before writing.
- Consider deduping the mapping steps between local/remote targets (same three commands).

```diff
 generate-mapping-code-local: regenerate-proto-local
-	go run $(mkfile_dir)/cmd/mapping_helper/main.go $(mkfile_dir)/testdata/mapping.json $(mkfile_dir)/mapping/mapping.go
+	mkdir -p $(mkfile_dir)/mapping
+	go run $(mkfile_dir)/cmd/mapping_helper/main.go $(mkfile_dir)/testdata/mapping.json $(mkfile_dir)/mapping/mapping.go
 	go fmt $(mkfile_dir)/mapping/mapping.go
 	rm -f $(mkfile_dir)/testdata/mapping.json
```

If you want, I can propose a small helper target to DRY these steps across both flows.
**42-46: Add default 'all', 'clean', and 'test' targets to satisfy checkmake and dev ergonomics.**

Static analysis flags missing all/clean/test. Add simple aliases and a default goal.

```diff
 .PHONY: generate-all-local
 generate-all-local: generate-mapping-code-local generate-proto

 .PHONY: generate-all
 generate-all: generate-mapping-code generate-proto
+
+# Developer ergonomics and CI helpers
+.PHONY: all clean test
+.DEFAULT_GOAL := all
+all: generate-all
+
+clean:
+	rm -rf $(mkfile_dir)/productv1 \
+		$(mkfile_dir)/mapping/mapping.go
+	# Keep proto inputs in testdata; do not delete them here.
+
+test:
+	@cd $(mkfile_dir) && go test ./...
```

Based on static analysis.

v2/pkg/grpctest/mockservice.go (3)
v2/pkg/grpctest/mockservice.go (3)
**312-342: Harden nested converters against nil inner elements**

Both converters assume non-nil inner list items; nils cause panics. Make the inner loops nil-safe.

convertNestedUserInputsToUsers:

```diff
 func convertNestedUserInputsToUsers(nestedInputs *productv1.ListOfListOfUserInput) *productv1.ListOfListOfUser {
 	if nestedInputs == nil || nestedInputs.List == nil {
 		return &productv1.ListOfListOfUser{
 			List: &productv1.ListOfListOfUser_List{
 				Items: []*productv1.ListOfUser{},
 			},
 		}
 	}
-	results := make([]*productv1.ListOfUser, len(nestedInputs.List.Items))
-	for i, userList := range nestedInputs.List.Items {
-		users := make([]*productv1.User, len(userList.List.Items))
-		for j, userInput := range userList.List.Items {
-			users[j] = &productv1.User{
-				Id:   fmt.Sprintf("nested-user-%d-%d", i, j),
-				Name: userInput.GetName(),
-			}
-		}
-		results[i] = &productv1.ListOfUser{
-			List: &productv1.ListOfUser_List{
-				Items: users,
-			},
-		}
-	}
+	results := make([]*productv1.ListOfUser, len(nestedInputs.List.Items))
+	for i, userList := range nestedInputs.List.Items {
+		var inItems []*productv1.UserInput
+		if userList != nil && userList.List != nil && userList.List.Items != nil {
+			inItems = userList.List.Items
+		}
+		users := make([]*productv1.User, 0, len(inItems))
+		for j, userInput := range inItems {
+			if userInput == nil {
+				continue
+			}
+			users = append(users, &productv1.User{
+				Id:   fmt.Sprintf("nested-user-%d-%d", i, j),
+				Name: userInput.GetName(),
+			})
+		}
+		results[i] = &productv1.ListOfUser{
+			List: &productv1.ListOfUser_List{
+				Items: users,
+			},
+		}
+	}
 	return &productv1.ListOfListOfUser{
 		List: &productv1.ListOfListOfUser_List{
 			Items: results,
 		},
 	}
 }
```

convertNestedCategoryInputsToCategories:

```diff
 func convertNestedCategoryInputsToCategories(nestedInputs *productv1.ListOfListOfCategoryInput) *productv1.ListOfListOfCategory {
 	if nestedInputs == nil || nestedInputs.List == nil {
 		return &productv1.ListOfListOfCategory{
 			List: &productv1.ListOfListOfCategory_List{
 				Items: []*productv1.ListOfCategory{},
 			},
 		}
 	}
-	results := make([]*productv1.ListOfCategory, len(nestedInputs.List.Items))
-	for i, categoryList := range nestedInputs.List.Items {
-		categories := make([]*productv1.Category, len(categoryList.List.Items))
-		for j, categoryInput := range categoryList.List.Items {
-			categories[j] = &productv1.Category{
-				Id:            fmt.Sprintf("nested-cat-%d-%d", i, j),
-				Name:          categoryInput.GetName(),
-				Kind:          categoryInput.GetKind(),
-				Subcategories: createSubcategories(fmt.Sprintf("nested-cat-%d-%d", i, j), categoryInput.GetKind(), j+1),
-			}
-		}
-		results[i] = &productv1.ListOfCategory{
-			List: &productv1.ListOfCategory_List{
-				Items: categories,
-			},
-		}
-	}
+	results := make([]*productv1.ListOfCategory, len(nestedInputs.List.Items))
+	for i, categoryList := range nestedInputs.List.Items {
+		var inItems []*productv1.CategoryInput
+		if categoryList != nil && categoryList.List != nil && categoryList.List.Items != nil {
+			inItems = categoryList.List.Items
+		}
+		categories := make([]*productv1.Category, 0, len(inItems))
+		for j, categoryInput := range inItems {
+			if categoryInput == nil {
+				continue
+			}
+			categories = append(categories, &productv1.Category{
+				Id:            fmt.Sprintf("nested-cat-%d-%d", i, j),
+				Name:          categoryInput.GetName(),
+				Kind:          categoryInput.GetKind(),
+				Subcategories: createSubcategories(fmt.Sprintf("nested-cat-%d-%d", i, j), categoryInput.GetKind(), j+1),
+			})
+		}
+		results[i] = &productv1.ListOfCategory{
+			List: &productv1.ListOfCategory_List{
+				Items: categories,
+			},
+		}
+	}
 	return &productv1.ListOfListOfCategory{
 		List: &productv1.ListOfListOfCategory_List{
 			Items: results,
 		},
 	}
 }
```

Also applies to: 344-376
**1093-1116: Reuse helper and fix local var typo**

Inline subcategory construction duplicates createSubcategories and uses a misspelled var name ("subcategoties"). Reuse the helper for consistency and clarity.

```diff
 	for i := 1; i <= 3; i++ {
-
-		subcategoties := make([]*productv1.Subcategory, 0, i)
-		for j := 1; j <= i; j++ {
-			subcategoties = append(subcategoties, &productv1.Subcategory{
-				Id:          fmt.Sprintf("%s-subcategory-%d", kind.String(), j),
-				Name:        fmt.Sprintf("%s Subcategory %d", kind.String(), j),
-				Description: &wrapperspb.StringValue{Value: fmt.Sprintf("%s Subcategory %d", kind.String(), j)},
-				IsActive:    true,
-			})
-		}
-
 		categories = append(categories, &productv1.Category{
 			Id:   fmt.Sprintf("%s-category-%d", kind.String(), i),
 			Name: fmt.Sprintf("%s Category %d", kind.String(), i),
 			Kind: kind,
-			Subcategories: &productv1.ListOfSubcategory{
-				List: &productv1.ListOfSubcategory_List{
-					Items: subcategoties,
-				},
-			},
+			Subcategories: createSubcategories(fmt.Sprintf("%s-category-%d", kind.String(), i), kind, i),
 		})
 	}
```
**710-745: Make randomness deterministic for tests**

rand.Intn usage makes outputs non-deterministic and can cause flaky tests. Prefer a seeded rand.Rand on MockService (e.g., dependency-injected) or seed locally with a fixed value for test runs.
Also applies to: 986-1020
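One way to realize the suggestion, sketched with a hypothetical wrapper (the real MockService has different fields; the seed parameter and `stock` method are illustrative):

```go
package main

import (
	"fmt"
	"math/rand"
)

// MockService carries its own seeded source instead of the global one,
// so test outputs are reproducible from run to run.
type MockService struct {
	rng *rand.Rand
}

func NewMockService(seed int64) *MockService {
	return &MockService{rng: rand.New(rand.NewSource(seed))}
}

// stock returns a pseudo-random value drawn from the injected source.
func (s *MockService) stock() int { return s.rng.Intn(100) }

func main() {
	a := NewMockService(42)
	b := NewMockService(42)
	same := true
	for i := 0; i < 5; i++ {
		if a.stock() != b.stock() {
			same = false
		}
	}
	fmt.Println(same) // true: identical seeds yield identical sequences
}
```

Injecting the seed at construction keeps production callers free to pass a time-based seed while tests pin a constant.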
v2/pkg/ast/path.go (2)
**84-89: WithPathItem is fine; optionally simplify with append**

The current copy+assign works. For readability you could do a single append on a copied slice.

Apply if you prefer:

```diff
-func (p Path) WithPathItem(element PathItem) Path {
-	res := make(Path, len(p)+1)
-	copy(res, p)
-	res[len(res)-1] = element
-	return res
-}
+func (p Path) WithPathItem(element PathItem) Path {
+	res := append(Path(nil), p...)
+	return append(res, element)
+}
```
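The slice aliasing rules behind the Path suggestions can be seen with plain string slices; a minimal sketch:

```go
package main

import "fmt"

func main() {
	base := []string{"query", "product"}

	// A subslice shares the backing array with the original.
	shorter := base[:1]
	// Appending within capacity writes into the shared array:
	shared := append(shorter, "user")
	fmt.Println(base[1], shared[1]) // user user: base[1] was overwritten

	// Copying first (append onto a nil slice) yields an independent array.
	base = []string{"query", "product"}
	independent := append(append([]string(nil), base[:1]...), "user")
	independent[1] = "order"
	fmt.Println(base[1], independent[1]) // product order
}
```

This is why a copy-on-write style (`append(Path(nil), p...)`) avoids surprising mutations when path values escape to callers.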
**98-104: RemoveLastItem shares backing array; document or copy**

Returning a subslice keeps the original backing array alive and shared. If immutability is desired, return a copied slice.

Option:

```diff
-func (p Path) RemoveLastItem() Path {
-	if len(p) == 0 {
-		return p
-	}
-	return p[:len(p)-1]
-}
+func (p Path) RemoveLastItem() Path {
+	if len(p) == 0 {
+		return p
+	}
+	out := make(Path, len(p)-1)
+	copy(out, p[:len(p)-1])
+	return out
+}
```

If sharing is intentional, add a short comment to prevent misuse.
v2/pkg/engine/datasource/grpc_datasource/execution_plan_test.go (1)
**53-63: buildPath works; consider minor robustness and API usage**

- Ignore empty elements (e.g., accidental leading/trailing dots).
- Optionally construct via ast.Path.WithFieldNameItem for encapsulation.

Example:

```diff
-func buildPath(path string) ast.Path {
-	pathElements := strings.Split(path, ".")
-	pathItems := make([]ast.PathItem, 0, len(pathElements))
-	for _, element := range pathElements {
-		pathItems = append(pathItems, ast.PathItem{
-			Kind:      ast.FieldName,
-			FieldName: []byte(element),
-		})
-	}
-	return pathItems
-}
+func buildPath(path string) ast.Path {
+	var p ast.Path
+	for _, part := range strings.Split(path, ".") {
+		if part == "" {
+			continue
+		}
+		p = p.WithFieldNameItem([]byte(part))
+	}
+	return p
+}
```

v2/pkg/engine/datasource/grpc_datasource/execution_plan_visitor.go (2)
**350-375: Resolver dependency tracking via relatedCallID is brittle; prefer a stack**

Incrementing/decrementing a shared counter can misattribute dependencies with nested/sibling resolvers. Track caller IDs with a small stack (push on entering a resolver field, pop on leave) to avoid off-by-one errors under complex nesting.

Would you switch relatedCallID to a slice stack (e.g., []int), using the last element as the current callerRef? This removes manual ++/-- and clarifies nesting semantics.
**383-390: Avoid overloading fieldPath with '@' sentinel; carry list metadata explicitly**

Encoding nesting via repeated '@' in names is opaque and fragile. Prefer a dedicated PathItem or a parallel stack carrying list nesting, or include nesting info in the setResolvedField inputs.

At minimum, define a named constant for the sentinel and document the contract at usage sites.
v2/pkg/engine/datasource/grpc_datasource/execution_plan_federation_test.go (2)
**868-1050: Solid resolver plan test; consider adding an error-path case**

The resolver call shape (DependentCalls, ResponsePath, context and args) looks correct. Consider a test where resolver args are missing/mismatched to assert a meaningful planning error.

**1052-1290: Multi-entity with resolver: good ordering and dependencies**

DependentCalls: []int{1} correctly ties the resolver to the product lookup. Nice. Add one more case with aliased _entities to ensure ResponsePath resolution is alias-aware.
v2/pkg/engine/datasource/grpc_datasource/execution_plan_field_resolvers_test.go (1)
**15-162: Reduce duplication with small builder helpers**

Expected plans are very verbose and brittle. Consider helper constructors (e.g., rpcCall(...), rpcField(...), path(...)) to cut repetition and ease future maintenance.
Also applies to: 164-394, 396-569, 571-692, 694-881
v2/pkg/engine/datasource/grpc_datasource/required_fields_visitor.go (1)
**139-141: Consider documenting/using fieldDefinitionRefs**

You collect field definition refs but don't use them here. If intentional for future use, add a short comment; otherwise consider removing them to avoid drift.
v2/pkg/grpctest/cmd/mapping_helper/templates/grpctest_mapping.tmpl (1)
**112-116: Optional: mark helper**

Call t.Helper() in MustDefaultGRPCMapping for nicer failure locations in callers.

```diff
 func MustDefaultGRPCMapping(t *testing.T) *grpcdatasource.GRPCMapping {
-	mapping := DefaultGRPCMapping()
-	return mapping
+	t.Helper()
+	return DefaultGRPCMapping()
 }
```

v2/pkg/engine/datasource/grpc_datasource/execution_plan.go (1)
**801-818: Use walker pool for consistency and to reduce allocations**

getFieldsFromContext creates a new default walker per call; elsewhere you use WalkerFromPool() + Release(). Align here.

```diff
-	walker := astvisitor.NewDefaultWalker()
-
-	v := newRequiredFieldsVisitor(&walker, &RPCMessage{}, r)
+	walker := astvisitor.WalkerFromPool()
+	defer walker.Release()
+	v := newRequiredFieldsVisitor(walker, &RPCMessage{}, r)
```

v2/pkg/grpctest/mapping/mapping.go (1)
**1236-1240: Helper ergonomics: mark as test helper**

Mark MustDefaultGRPCMapping as a test helper to improve failure attribution.

```diff
 func MustDefaultGRPCMapping(t *testing.T) *grpcdatasource.GRPCMapping {
-	mapping := DefaultGRPCMapping()
-	return mapping
+	t.Helper()
+	return DefaultGRPCMapping()
 }
```

v2/pkg/grpctest/cmd/mapping_helper/main.go (2)
**114-137: Avoid shadowing the 'mapping' identifier**

The loop variable mapping shadows the outer JSONMapping, reducing readability and risking mistakes.

```diff
-	for _, mapping := range mapping.ResolveMappings {
-		if mapping.Type != "LOOKUP_TYPE_RESOLVE" {
+	for _, rm := range mapping.ResolveMappings {
+		if rm.Type != "LOOKUP_TYPE_RESOLVE" {
 			continue
 		}
-		t := mapping.LookupMapping.Type
+		t := rm.LookupMapping.Type
 		item, ok := data.ResolveRPCs[t]
 		if !ok {
 			item = []ResolveRPC{}
 		}
-		item = append(item, ResolveRPC{
-			LookupMapping: mapping.LookupMapping,
-			RPC:           mapping.RPC,
-			Request:       mapping.Request,
-			Response:      mapping.Response,
-		})
+		item = append(item, ResolveRPC{
+			LookupMapping: rm.LookupMapping,
+			RPC:           rm.RPC,
+			Request:       rm.Request,
+			Response:      rm.Response,
+		})
 		data.ResolveRPCs[t] = item
 	}
```
**108-113: Stricter JSON parsing to catch mapping drift**

Consider DisallowUnknownFields to fail fast on unexpected keys in the input mapping.

```diff
-	var mapping JSONMapping
-	err = json.Unmarshal(jsonBytes, &mapping)
+	var mapping JSONMapping
+	dec := json.NewDecoder(bytes.NewReader(jsonBytes))
+	dec.DisallowUnknownFields()
+	err = dec.Decode(&mapping)
 	if err != nil {
 		log.Fatal(err)
 	}
```

v2/pkg/engine/datasource/grpc_datasource/grpc_datasource_test.go (2)
**230-231: Remove noisy prints from tests**

Use t.Logf if needed. stdout noise makes CI logs harder to parse.

```diff
-	fmt.Println(output.String())
+	// t.Log(output.String())
```

```diff
-	bytes := output.Bytes()
-	fmt.Println(string(bytes))
+	out := output.Bytes()
+	// t.Log(string(out))

-	err = json.Unmarshal(bytes, &resp)
+	err = json.Unmarshal(out, &resp)
```

```diff
-	pretty := new(bytes.Buffer)
-	err = json.Indent(pretty, output.Bytes(), "", " ")
-	require.NoError(t, err)
-	fmt.Println(pretty.String())
+	// pretty := new(bytes.Buffer)
+	// _ = json.Indent(pretty, output.Bytes(), "", " ")
+	// t.Log(pretty.String())
```

Also applies to: 316-319, 3935-3939
**314-321: Avoid shadowing the bytes package with a local var named bytes**

Renaming improves clarity and avoids accidental misuse.

```diff
-	bytes := output.Bytes()
-	fmt.Println(string(bytes))
-	err = json.Unmarshal(bytes, &resp)
+	out := output.Bytes()
+	fmt.Println(string(out))
+	err = json.Unmarshal(out, &resp)
```

v2/pkg/engine/datasource/grpc_datasource/grpc_datasource.go (2)
**152-156: Abort further chunks on RPC error (avoid cascading calls).**

After errGrp.Wait fails, returning nil lets TopologicalSortResolve continue with later chunks. Prefer aborting early.

```diff
-	if err := errGrp.Wait(); err != nil {
-		out.Write(builder.writeErrorBytes(err))
-		failed = true
-		return nil
-	}
+	if err := errGrp.Wait(); err != nil {
+		out.Write(builder.writeErrorBytes(err))
+		return err
+	}
```
**132-133: Typo in comment.**

“variablese” → “variables”.
v2/pkg/engine/datasource/grpc_datasource/execution_plan_visitor_federation.go (1)
**191-197: Verify selection set slice bounds (off-by-one risk).**

Using lbrace := LBrace.CharEnd and rbrace := RBrace.CharStart - 1 may drop the last character. If RBrace.CharStart already points to '}', slicing [lbrace:rbrace] excludes the character before '}'. Double-check token semantics; adjust to [lbrace:RBrace.CharStart] if needed.
v2/pkg/engine/datasource/grpc_datasource/fetch.go (1)
**95-106: Optional: use a slice of levels instead of a map for chunks.**

Since levels are contiguous (0..max), a [][]FetchItem avoids map lookups and clarifies execution order.
v2/pkg/engine/datasource/grpc_datasource/compiler.go (1)
**124-139: Name-collision risk in node index (simple names as keys)**

`nodes` uses simple names as keys. Imports can bring in identically named types from different packages, causing collisions and wrong lookups. Prefer fully qualified names (`descriptorFullName`) as keys.

Would you like me to prepare a follow-up refactor plan to key nodes by fully qualified names and adjust lookups accordingly?
📜 Review details

Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro

⛔ Files ignored due to path filters (4)

- `v2/pkg/grpctest/productv1/product.pb.go` is excluded by `!**/*.pb.go`
- `v2/pkg/grpctest/productv1/product_grpc.pb.go` is excluded by `!**/*.pb.go`
- `v2/pkg/grpctest/productv1/testdata/service.pb.go` is excluded by `!**/*.pb.go`
- `v2/pkg/grpctest/productv1/testdata/service_grpc.pb.go` is excluded by `!**/*.pb.go`
📒 Files selected for processing (26)

- `v2/pkg/ast/path.go` (2 hunks)
- `v2/pkg/engine/datasource/grpc_datasource/compiler.go` (12 hunks)
- `v2/pkg/engine/datasource/grpc_datasource/compiler_test.go` (2 hunks)
- `v2/pkg/engine/datasource/grpc_datasource/configuration.go` (7 hunks)
- `v2/pkg/engine/datasource/grpc_datasource/execution_plan.go` (7 hunks)
- `v2/pkg/engine/datasource/grpc_datasource/execution_plan_federation_test.go` (9 hunks)
- `v2/pkg/engine/datasource/grpc_datasource/execution_plan_field_resolvers_test.go` (1 hunk)
- `v2/pkg/engine/datasource/grpc_datasource/execution_plan_test.go` (2 hunks)
- `v2/pkg/engine/datasource/grpc_datasource/execution_plan_visitor.go` (11 hunks)
- `v2/pkg/engine/datasource/grpc_datasource/execution_plan_visitor_federation.go` (11 hunks)
- `v2/pkg/engine/datasource/grpc_datasource/fetch.go` (1 hunk)
- `v2/pkg/engine/datasource/grpc_datasource/fetch_test.go` (1 hunk)
- `v2/pkg/engine/datasource/grpc_datasource/grpc_datasource.go` (3 hunks)
- `v2/pkg/engine/datasource/grpc_datasource/grpc_datasource_test.go` (8 hunks)
- `v2/pkg/engine/datasource/grpc_datasource/json_builder.go` (5 hunks)
- `v2/pkg/engine/datasource/grpc_datasource/mapping_test_helper.go` (15 hunks)
- `v2/pkg/engine/datasource/grpc_datasource/required_fields_visitor.go` (4 hunks)
- `v2/pkg/grpctest/Makefile` (3 hunks)
- `v2/pkg/grpctest/Readme.md` (1 hunk)
- `v2/pkg/grpctest/cmd/mapping_helper/main.go` (1 hunk)
- `v2/pkg/grpctest/cmd/mapping_helper/templates/grpctest_mapping.tmpl` (1 hunk)
- `v2/pkg/grpctest/mapping/mapping.go` (15 hunks)
- `v2/pkg/grpctest/mockservice.go` (11 hunks)
- `v2/pkg/grpctest/product.proto` (6 hunks)
- `v2/pkg/grpctest/schema.go` (2 hunks)
- `v2/pkg/grpctest/testdata/products.graphqls` (4 hunks)
🧰 Additional context used
🧠 Learnings (3)
📚 Learning: 2025-08-25T14:44:39.420Z
Learnt from: Noroth
PR: wundergraph/graphql-go-tools#1283
File: v2/pkg/grpctest/product.proto:14-15
Timestamp: 2025-08-25T14:44:39.420Z
Learning: In the gRPC test mapping system in v2/pkg/grpctest/mapping/mapping.go, entity lookup RPCs (like LookupWarehouseById) belong in EntityRPCs mapping but should NOT be added to QueryRPCs. Entity RPCs and Query RPCs serve different purposes in the mapping architecture.
Applied to files:
v2/pkg/engine/datasource/grpc_datasource/execution_plan_federation_test.go
v2/pkg/engine/datasource/grpc_datasource/configuration.go
v2/pkg/engine/datasource/grpc_datasource/mapping_test_helper.go
v2/pkg/grpctest/mapping/mapping.go
📚 Learning: 2025-08-29T09:35:47.969Z
Learnt from: ysmolski
PR: wundergraph/graphql-go-tools#1282
File: v2/pkg/engine/plan/visitor.go:5-5
Timestamp: 2025-08-29T09:35:47.969Z
Learning: The wundergraph/graphql-go-tools project does not support Go versions < 1.23, so compatibility concerns for features available in Go 1.21+ (like cmp.Or) should not be raised.
Applied to files:
v2/pkg/engine/datasource/grpc_datasource/execution_plan_test.go
📚 Learning: 2025-08-08T09:43:07.433Z
Learnt from: Noroth
PR: wundergraph/graphql-go-tools#1262
File: v2/pkg/engine/datasource/grpc_datasource/json_builder.go:0-0
Timestamp: 2025-08-08T09:43:07.433Z
Learning: In v2/pkg/engine/datasource/grpc_datasource/json_builder.go, mergeEntities intentionally uses the loop index when calling indexMap.getResultIndex because the index map is type-aware, making per-type counters unnecessary under the current assumptions. Avoid suggesting per-type ordinal counters for this path in future reviews.
Applied to files:
v2/pkg/engine/datasource/grpc_datasource/json_builder.go
🪛 checkmake (0.2.2)
v2/pkg/grpctest/Makefile
[warning] 45-45: Missing required phony target "all"
(minphony)
[warning] 45-45: Missing required phony target "clean"
(minphony)
[warning] 45-45: Missing required phony target "test"
(minphony)
🪛 LanguageTool

v2/pkg/grpctest/Readme.md

[grammar] QB_NEW_EN ("There might be a mistake here") flagged the bullet lists at lines ~9-12, ~21-26, ~33-35, and ~42-44, covering the feature overview (data types, GraphQL schema, protobuf definitions, field mappings, test utilities), the operation descriptions (entity resolution, queries, mutations, field resolvers, union types, nullable fields, complex lists), the schema loading utilities, and the mapping documentation (operation, field, argument, and entity lookup mappings).
(QB_NEW_EN)
[grammar] ~45-~45: There might be a mistake here.
Context: ...iguration for entity resolution patterns - Enum Mappings: GraphQL enum to protobu...
(QB_NEW_EN)
[grammar] ~90-~90: There might be a mistake here.
Context: ...rvice: 1. Update the GraphQL schema in testdata/products.graphqls
2. Regenerate the Protocol Buffer definitio...
(QB_NEW_EN)
[grammar] ~91-~91: There might be a mistake here.
Context: ...e the Protocol Buffer definitions using make regenerate-proto
3. Update the mock service implementation i...
(QB_NEW_EN)
[grammar] ~92-~92: There might be a mistake here.
Context: ...date the mock service implementation in mockservice.go
4. Add corresponding field mappings in `map...
(QB_NEW_EN)
[grammar] ~93-~93: There might be a mistake here.
Context: ... 4. Add corresponding field mappings in mapping/mapping.go
5. Update field configurations in `schema.g...
(QB_NEW_EN)
[grammar] ~100-~100: There might be a mistake here.
Context: ...coverage of: - Request/response mapping - Field resolution patterns - Error handli...
(QB_NEW_EN)
[grammar] ~101-~101: There might be a mistake here.
Context: ...onse mapping - Field resolution patterns - Error handling scenarios - Performance c...
(QB_NEW_EN)
[grammar] ~102-~102: There might be a mistake here.
Context: ...tion patterns - Error handling scenarios - Performance characteristics - Edge cases...
(QB_NEW_EN)
[grammar] ~103-~103: There might be a mistake here.
Context: ... scenarios - Performance characteristics - Edge cases and boundary conditions The ...
(QB_NEW_EN)
🪛 markdownlint-cli2 (0.18.1)
v2/pkg/grpctest/Readme.md
68-68: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: Build and test (go 1.25 / ubuntu-latest)
- GitHub Check: Build and test (go 1.25 / ubuntu-latest)
🔇 Additional comments (23)
v2/pkg/grpctest/product.proto (3)
58-63: New resolver RPCs: ensure server stubs, registration, and tests are in place. Looks good. Please confirm server implementations, routing/registration, and tests exist for these methods.
4-4: Verify go_package path matches module/repo layout. go_package is "cosmo/pkg/proto/productv1;productv1" while this file lives under v2/pkg/grpctest. Ensure this is intentional and aligns with import paths used by generated Go code.
185-191: ListOfSubcategory wrapper: consistent and clear. The wrapper mirrors the existing pattern for list types. LGTM.
v2/pkg/grpctest/mockservice.go (1)
238-263: Good helper extraction. createSubcategories is clean, reusable, and keeps responses consistent across RPCs. LGTM.
v2/pkg/engine/datasource/grpc_datasource/execution_plan_visitor.go (1)
115-129: Resolver calls appended at LeaveDocument: verify DependentCalls indices. Appending resolver calls after preallocated root calls changes indices. Ensure createResolverRPCCalls computes DependentCalls against final positions, especially with multiple root fields and mixed resolver depths.
Would you add a test with 2+ root fields each having nested resolvers to confirm DependentCalls point to the intended root call indices?
v2/pkg/engine/datasource/grpc_datasource/compiler_test.go (2)
288-295: LGTM: tests aligned with ServiceCall API. Renaming to serviceCalls and accessing Input.ProtoReflect matches the new Compile contract.
403-408: LGTM: asserting compiled input via ProtoReflect. Using ProtoReflect on serviceCalls[0].Input is appropriate.
v2/pkg/engine/datasource/grpc_datasource/execution_plan_federation_test.go (1)
55-58: LGTM: CallKind annotations improve plan clarity. Marking entity calls with CallKindEntity is consistent and helpful for downstream orchestration.
Also applies to: 140-143, 346-349, 570-573, 780-783
v2/pkg/grpctest/schema.go (2)
75-87: Good: TB-based MustGraphQLSchema with validation. Nice switch to testing.TB and in-function validation. LGTM.
98-102: Good: TB-based MustProtoSchema. TB usage and require.NoError are appropriate. LGTM.
v2/pkg/engine/datasource/grpc_datasource/json_builder.go (3)
20-24: Good: constant for resolve result path. Using a named constant for "result" avoids magic strings. LGTM.
192-221: Good: arena-backed entity merge container. Arena-backed object/array creation for _entities is fine; it preserves indexMap ordering.
618-619: Good: enum mapping via FindEnumValueMapping. Switching to mapping lookup aligns with the config changes. LGTM.
Also applies to: 667-668
v2/pkg/engine/datasource/grpc_datasource/required_fields_visitor.go (1)
52-66: Good: generalized visitWithMemberTypes. The refactor enables supplying explicit member types, with clean delegation from visitRequiredFields. LGTM.
v2/pkg/grpctest/cmd/mapping_helper/templates/grpctest_mapping.tmpl (1)
9-109: Template structure LGTM. The template builds RPC maps, resolvers, entities, enums, and fields consistent with the new types. Looks good.
v2/pkg/grpctest/mapping/mapping.go (1)
302-333: Entity RPCs correctly scoped to EntityRPCs (not QueryRPCs). Placement of Lookup* RPCs only under EntityRPCs matches the intended architecture. LGTM.
Based on learnings
v2/pkg/engine/datasource/grpc_datasource/grpc_datasource_test.go (2)
272-291: Type of ArgumentMappings: ensure consistency. Some places use map[string]string; others use FieldArgumentMap. If FieldArgumentMap is a distinct type (not an alias), these literals may be inconsistent.
Please confirm FieldArgumentMap's definition. If it is a distinct type, wrap literals with FieldArgumentMap{...} for consistency.
Also applies to: 355-383, 568-584, 741-750, 947-956, 1193-1203, 1539-1549
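One Go detail relevant to the distinct-type question above: a composite literal written with the unnamed underlying type (`map[string]string{...}`) is assignable to a defined map type, so such literals compile even when the target is a distinct named type rather than an alias; only values of a *different* named type are rejected. A minimal sketch, where `FieldConfig` and `ArgumentMappings` are hypothetical names used purely for illustration:

```go
package main

import "fmt"

// FieldArgumentMap as a defined (non-alias) map type, mirroring the type under discussion.
type FieldArgumentMap map[string]string

// FieldConfig is an illustrative container, not the actual configuration struct.
type FieldConfig struct {
	ArgumentMappings FieldArgumentMap
}

// The unnamed type map[string]string is assignable to the defined type,
// so this literal compiles whether FieldArgumentMap is an alias or not.
var cfg = FieldConfig{ArgumentMappings: map[string]string{"id": "user_id"}}

func main() {
	fmt.Println(cfg.ArgumentMappings["id"])
}
```

So the concern is about stylistic consistency rather than compilation, unless the literals cross between two different named types.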
87-101: Benchmarks: replace b.Loop() with a standard for-loop. Same issue in the second benchmark.

- for b.Loop() {
+ for i := 0; i < b.N; i++ {
  	ds, err := NewDataSource(conn, DataSourceConfig{
  		Operation:    &queryDoc,
  		Definition:   &schemaDoc,
  		SubgraphName: "Products",
  		Compiler:     compiler,
  		Mapping:      testMapping(),
  	})
  	require.NoError(b, err)
  	output := new(bytes.Buffer)
  	err = ds.Load(context.Background(), []byte(`{"query":"`+query+`","body":`+variables+`}`), output)
  	require.NoError(b, err)
  }

Likely an incorrect or invalid review comment.
v2/pkg/engine/datasource/grpc_datasource/execution_plan.go (1)
839-857: No changes needed: go.mod already sets Go 1.25, satisfying the slices.Collect requirement.

v2/pkg/grpctest/testdata/products.graphqls (1)
1-3: Schema updates for field resolvers look good. Directive, scalar, and resolver-decorated fields are consistent and self-contained.
Also applies to: 120-140, 126-133, 341-367, 454-456
v2/pkg/engine/datasource/grpc_datasource/configuration.go (1)
9-14: Typed mappings and Find* APIs are a solid improvement. The generic RPCConfigMap and resolver-specific Find* helpers simplify usage and reduce errors.
Also applies to: 19-20, 35-37, 89-99, 100-118, 119-138, 139-162, 163-179, 180-198
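The typed-map pattern praised above can be illustrated with a small standalone sketch. The names here (`RPCConfig`, `rpcConfigMap`, `Find`, the `"Type.field"` key scheme) are hypothetical stand-ins loosely modeled on the described `RPCConfigMap`/`Find*` helpers; the real API in this PR may differ:

```go
package main

import "fmt"

// RPCConfig is an illustrative config entry, not the actual type from this PR.
type RPCConfig struct {
	RPC      string
	Request  string
	Response string
}

// rpcConfigMap is a generic map keyed by "Type.field", sketching the typed-map idea.
type rpcConfigMap[T any] map[string]T

// Find looks up a config for a type/field pair and reports whether it exists.
func (m rpcConfigMap[T]) Find(typeName, fieldName string) (T, bool) {
	v, ok := m[typeName+"."+fieldName]
	return v, ok
}

var queryRPCs = rpcConfigMap[RPCConfig]{
	"Query.users": {RPC: "QueryUsers", Request: "QueryUsersRequest", Response: "QueryUsersResponse"},
}

func main() {
	if cfg, ok := queryRPCs.Find("Query", "users"); ok {
		fmt.Println(cfg.RPC) // prints: QueryUsers
	}
}
```

The appeal of this shape is that each map (queries, mutations, resolvers, entities) carries its own value type, so a lookup cannot return the wrong kind of config.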
v2/pkg/engine/datasource/grpc_datasource/mapping_test_helper.go (2)
295-326: Entity RPCs correctly separated from Query RPCs. Entity lookups are under `EntityRPCs` only; none are duplicated in `QueryRPCs`. LGTM. Based on learnings.
221-294: ResolveRPCs and Field mappings look consistent. Typed maps (`RPCConfigMap[...]`, `FieldArgumentMap`) and resolver entries appear coherent. No blockers spotted in this helper. If helpful, I can generate a quick script to check that every RPC name here exists in the proto test fixtures and that every `TargetName` appears in the corresponding proto messages.

Also applies to: 340-1225
response, err := builder.marshalResponseJSON(&a, &serviceCall.RPC.Response, serviceCall.Output)
if err != nil {
	return nil
}
Don’t swallow marshal errors.
Returning nil on marshal failure hides the error and may produce invalid merges. Return the error.
- response, err := builder.marshalResponseJSON(&a, &serviceCall.RPC.Response, serviceCall.Output)
- if err != nil {
- return nil
- }
+ response, err := builder.marshalResponseJSON(&a, &serviceCall.RPC.Response, serviceCall.Output)
+ if err != nil {
+ return err
+ }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

response, err := builder.marshalResponseJSON(&a, &serviceCall.RPC.Response, serviceCall.Output)
if err != nil {
	return err
}
🤖 Prompt for AI Agents
In v2/pkg/engine/datasource/grpc_datasource/grpc_datasource.go around lines
126-129, the code swallows marshal errors by returning nil on failure; change
the error path to return the error instead (e.g., return nil, err) and
optionally wrap it with contextual information (fmt.Errorf or errors.Wrap) so
callers receive the marshal failure instead of a nil response.
func (j *jsonBuilder) mergeWithPath(base *astjson.Value, resolved *astjson.Value, path ast.Path) error {
	resolvedValues := resolved.GetArray(resolveResponsePath)

	searchPath := path[:len(path)-1]
	elementName := path[len(path)-1].FieldName.String()

	responseValues := make([]*astjson.Value, 0, len(resolvedValues))

	current := base
	current = current.Get(searchPath[0].FieldName.String())
	switch current.Type() {
	case astjson.TypeArray:
		arr := current.GetArray()
		values, err := j.flattenList(arr, searchPath[1:])
		if err != nil {
			return err
		}
		responseValues = append(responseValues, values...)
	default:
		values, err := j.flattenObject(current, searchPath[1:])
		if err != nil {
			return err
		}
		responseValues = append(responseValues, values...)
	}

	if len(resolvedValues) < len(responseValues) {
		return fmt.Errorf("length of values doesn't match the length of the result array, expected %d, got %d", len(responseValues), len(resolvedValues))
	}

	for i := range responseValues {
		responseValues[i].Set(elementName, resolvedValues[i].Get(elementName))
	}

	return nil
}
Fix mergeWithPath: invalid GetArray call, missing root-path guard, weak length check

- `resolved.GetArray(resolveResponsePath)` is invalid; use `resolved.Get(...).Array()`.
- Guard empty path and root-level merges.
- Enforce a 1:1 length match to avoid silent truncation.
-func (j *jsonBuilder) mergeWithPath(base *astjson.Value, resolved *astjson.Value, path ast.Path) error {
- resolvedValues := resolved.GetArray(resolveResponsePath)
-
- searchPath := path[:len(path)-1]
- elementName := path[len(path)-1].FieldName.String()
-
- responseValues := make([]*astjson.Value, 0, len(resolvedValues))
-
- current := base
- current = current.Get(searchPath[0].FieldName.String())
- switch current.Type() {
- case astjson.TypeArray:
- arr := current.GetArray()
- values, err := j.flattenList(arr, searchPath[1:])
- if err != nil {
- return err
- }
- responseValues = append(responseValues, values...)
- default:
- values, err := j.flattenObject(current, searchPath[1:])
- if err != nil {
- return err
- }
- responseValues = append(responseValues, values...)
- }
-
- if len(resolvedValues) < len(responseValues) {
- return fmt.Errorf("length of values doesn't match the length of the result array, expected %d, got %d", len(responseValues), len(resolvedValues))
- }
-
- for i := range responseValues {
- responseValues[i].Set(elementName, resolvedValues[i].Get(elementName))
- }
-
- return nil
-}
+func (j *jsonBuilder) mergeWithPath(base *astjson.Value, resolved *astjson.Value, path ast.Path) error {
+ if path.Len() == 0 {
+ return fmt.Errorf("empty resolve path")
+ }
+ // Resolve results
+ rv, err := resolved.Get(resolveResponsePath).Array()
+ if err != nil {
+ return fmt.Errorf("resolve response: expected array at %q: %w", resolveResponsePath, err)
+ }
+ elementName := path[path.Len()-1].FieldName.String()
+
+ // Collect target objects to patch
+ var responseValues []*astjson.Value
+ searchPath := path[:path.Len()-1]
+ if len(searchPath) == 0 {
+ responseValues = []*astjson.Value{base}
+ } else {
+ current := base.Get(searchPath[0].FieldName.String())
+ switch current.Type() {
+ case astjson.TypeArray:
+ values, err := j.flattenList(current.GetArray(), searchPath[1:])
+ if err != nil {
+ return err
+ }
+ responseValues = append(responseValues, values...)
+ case astjson.TypeObject:
+ values, err := j.flattenObject(current, searchPath[1:])
+ if err != nil {
+ return err
+ }
+ responseValues = append(responseValues, values...)
+ default:
+ return fmt.Errorf("expected array or object at path segment, got %s", current.Type())
+ }
+ }
+
+ if len(rv) != len(responseValues) {
+ return fmt.Errorf("resolve result length mismatch: expected %d, got %d", len(responseValues), len(rv))
+ }
+ for i := range responseValues {
+ responseValues[i].Set(elementName, rv[i].Get(elementName))
+ }
+ return nil
+}
Note: Keeping index mapping as-is in merge logic is intentional. Based on learnings
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
func (j *jsonBuilder) mergeWithPath(base *astjson.Value, resolved *astjson.Value, path ast.Path) error {
	if path.Len() == 0 {
		return fmt.Errorf("empty resolve path")
	}
	// Resolve results
	rv, err := resolved.Get(resolveResponsePath).Array()
	if err != nil {
		return fmt.Errorf("resolve response: expected array at %q: %w", resolveResponsePath, err)
	}
	elementName := path[path.Len()-1].FieldName.String()

	// Collect target objects to patch
	var responseValues []*astjson.Value
	searchPath := path[:path.Len()-1]
	if len(searchPath) == 0 {
		responseValues = []*astjson.Value{base}
	} else {
		current := base.Get(searchPath[0].FieldName.String())
		switch current.Type() {
		case astjson.TypeArray:
			values, err := j.flattenList(current.GetArray(), searchPath[1:])
			if err != nil {
				return err
			}
			responseValues = append(responseValues, values...)
		case astjson.TypeObject:
			values, err := j.flattenObject(current, searchPath[1:])
			if err != nil {
				return err
			}
			responseValues = append(responseValues, values...)
		default:
			return fmt.Errorf("expected array or object at path segment, got %s", current.Type())
		}
	}

	if len(rv) != len(responseValues) {
		return fmt.Errorf("resolve result length mismatch: expected %d, got %d", len(responseValues), len(rv))
	}
	for i := range responseValues {
		responseValues[i].Set(elementName, rv[i].Get(elementName))
	}
	return nil
}
🤖 Prompt for AI Agents
v2/pkg/engine/datasource/grpc_datasource/json_builder.go lines 223-258: the code
wrongly calls resolved.GetArray(resolveResponsePath), fails to guard against
empty/root paths and only checks that resolvedValues is not shorter than
responseValues (allowing silent truncation); fix by first validating len(path)
>= 1 and handle root-level merge when searchPath is empty (don't index into
searchPath[0], use base directly), retrieve the resolved array via the proper
API (e.g., resolved.Get(resolveResponsePath).Array() or
resolved.Get(resolveResponsePath).GetArray() depending on the astjson API)
instead of GetArray(resolveResponsePath), and enforce a strict 1:1 length
match—if len(resolvedValues) != len(responseValues) return an error—before
mapping elementName into each responseValues[i].
No you're wrong here.
@Noroth, thank you for the feedback. Could you help me understand which part of my analysis is incorrect? I'd like to learn more about:

- Is `GetArray(resolveResponsePath)` a valid API pattern in astjson that I missed?
- Are there guarantees from the calling context that make the path validation unnecessary?
- Is the length check intentionally allowing more resolved values than response values for a specific reason?

Understanding the specific issue will help me provide better feedback in the future.
🧠 Learnings used
Learnt from: Noroth
PR: wundergraph/graphql-go-tools#1262
File: v2/pkg/engine/datasource/grpc_datasource/json_builder.go:0-0
Timestamp: 2025-08-08T09:43:07.433Z
Learning: In v2/pkg/engine/datasource/grpc_datasource/json_builder.go, mergeEntities intentionally uses the loop index when calling indexMap.getResultIndex because the index map is type-aware, making per-type counters unnecessary under the current assumptions. Avoid suggesting per-type ordinal counters for this path in future reviews.
resolved.GetArray(resolveResponsePath) is valid.
@Noroth, thank you for the clarification. I was incorrect about the `GetArray` API.

Could you also confirm whether my other two concerns are valid or if I've misunderstood those as well:

- Empty path guard: Should the function validate that `path.Len() > 0` to prevent panics when accessing `path[len(path)-1]`?
- Length check: The current check `len(resolvedValues) < len(responseValues)` allows more resolved values than response values. Should this be a strict equality check (`len(resolvedValues) != len(responseValues)`) to catch mismatches in both directions?

This will help me understand if there are caller guarantees that make these checks unnecessary.
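For reference, the two guards in question can be expressed as a standalone sketch. `validateMerge` is a hypothetical helper written only to illustrate the checks; it is not code from this PR:

```go
package main

import (
	"errors"
	"fmt"
)

// validateMerge encodes the two guards discussed above:
// a non-empty resolve path, and a strict 1:1 match between the number of
// resolved values and the number of response objects they would patch.
func validateMerge(pathLen, resolvedLen, responseLen int) error {
	if pathLen == 0 {
		return errors.New("empty resolve path")
	}
	if resolvedLen != responseLen {
		return fmt.Errorf("length mismatch: expected %d resolved values, got %d", responseLen, resolvedLen)
	}
	return nil
}

func main() {
	fmt.Println(validateMerge(0, 1, 1)) // empty path rejected
	fmt.Println(validateMerge(2, 3, 2)) // surplus resolved values rejected
	fmt.Println(validateMerge(2, 2, 2)) // <nil>
}
```

Whether these guards belong here or are already enforced by the caller is exactly the open question above.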
v2/pkg/grpctest/Makefile
Outdated
protoc --go_out=productv1 --go_opt=paths=source_relative \
	--go-grpc_out=productv1 --go-grpc_opt=paths=source_relative \
	product.proto
mv $(mkfile_dir)/testdata/service.proto $(mkfile_dir)/product.proto
rm -f $(mkfile_dir)/testdata/service.proto.lock.json
Critical: protoc runs before moving the generated proto (fresh clones fail; artifacts out of sync).
You compile product.proto, then overwrite it by moving service.proto after. On a clean repo product.proto may not exist; and even if it does, you’ll emit code for the old proto.
Prefer compiling the generated service.proto directly and drop the mv.
- protoc --go_out=productv1 --go_opt=paths=source_relative \
- --go-grpc_out=productv1 --go-grpc_opt=paths=source_relative \
- product.proto
- mv $(mkfile_dir)/testdata/service.proto $(mkfile_dir)/product.proto
- rm -f $(mkfile_dir)/testdata/service.proto.lock.json
+ protoc -I $(mkfile_dir)/testdata \
+ --go_out=$(mkfile_dir)/productv1 --go_opt=paths=source_relative \
+ --go-grpc_out=$(mkfile_dir)/productv1 --go-grpc_opt=paths=source_relative \
+ $(mkfile_dir)/testdata/service.proto
+ rm -f $(mkfile_dir)/testdata/service.proto.lock.json
Alternative (if you insist on renaming): move first, then protoc; still recommend the direct-compile approach above.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
# Compile directly from testdata/service.proto; drop the mv
protoc -I $(mkfile_dir)/testdata \
	--go_out=$(mkfile_dir)/productv1 --go_opt=paths=source_relative \
	--go-grpc_out=$(mkfile_dir)/productv1 --go-grpc_opt=paths=source_relative \
	$(mkfile_dir)/testdata/service.proto
rm -f $(mkfile_dir)/testdata/service.proto.lock.json
🤖 Prompt for AI Agents
In v2/pkg/grpctest/Makefile around lines 10 to 14, the Makefile runs protoc on
product.proto and then moves testdata/service.proto over product.proto, which
causes fresh clones to fail or generated code to be out of sync; instead update
the Makefile to run protoc directly against the testdata/service.proto (or move
service.proto to product.proto before invoking protoc if renaming is required),
and remove the post-protoc mv and related rm so the generator always compiles
the actual source file.
Holy cheese! Even though it is big, most of it is autogen. I will try to review it, just let me know when you are done with coderabbitai
…or-graphql-operation
…or-graphql-operation
@coderabbitai review
✅ Actions performed

Review triggered.
Actionable comments posted: 1
♻️ Duplicate comments (1)
v2/pkg/grpctest/Makefile (1)
10-12: Critical issue from previous review is unresolved: `generate-proto` will fail on fresh clones.

The previous reviewer flagged that protoc runs on `product.proto` before the file is generated and moved into place. This exact issue persists: `product.proto` does not exist on a fresh clone, causing the `generate-proto` target to fail. Additionally, `generate-proto` is a dependency of `generate-all` (line 48), so end-to-end generation on fresh clones will break.

Additionally, lines 10-12 lack the `$(mkfile_dir)` prefix and proper include paths (`-I testdata`), creating path inconsistency compared to the new targets below.

Apply this diff to compile directly against the generated source file and fix the path inconsistency:

 .PHONY: generate-proto
 generate-proto: install-protoc
-	protoc --go_out=productv1 --go_opt=paths=source_relative \
-		--go-grpc_out=productv1 --go-grpc_opt=paths=source_relative \
-		product.proto
+	protoc -I $(mkfile_dir)/testdata \
+		--go_out=$(mkfile_dir)/productv1 --go_opt=paths=source_relative \
+		--go-grpc_out=$(mkfile_dir)/productv1 --go-grpc_opt=paths=source_relative \
+		$(mkfile_dir)/testdata/service.proto
🧹 Nitpick comments (2)
v2/pkg/engine/datasource/grpc_datasource/execution_plan_visitor.go (2)
114-128: Consider re-slicing instead of nil assignment for resolvedFields.

Setting `r.resolvedFields = nil` releases the underlying array memory, but also loses the allocated capacity. If this visitor instance is reused, consider `r.resolvedFields = r.resolvedFields[:0]` to retain capacity and reduce allocations. However, if visitor instances are short-lived, the current approach is fine.

Apply this diff if visitor instances are reused:

- r.resolvedFields = nil
+ r.resolvedFields = r.resolvedFields[:0]
388-394: Clarify the @ prefix convention for nested lists.

The use of `@` characters as a nesting level prefix for Protobuf field paths is non-standard. Ensure this convention is documented and understood by consumers of the field path. Consider whether this should be a well-known constant or configurable.

Add a constant and documentation:

+// fieldPathNestingPrefix is prepended to field names to indicate list nesting depth in Protobuf field paths.
+// For example, a doubly-nested list would have prefix "@@".
+const fieldPathNestingPrefix = "@"
+
 // If we have a nested or nullable list, we add a @ prefix to indicate the nesting level.
 prefix := ""
 if field.ListMetadata != nil {
-	prefix = strings.Repeat("@", field.ListMetadata.NestingLevel)
+	prefix = strings.Repeat(fieldPathNestingPrefix, field.ListMetadata.NestingLevel)
 }
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- v2/pkg/engine/datasource/grpc_datasource/compiler.go (12 hunks)
- v2/pkg/engine/datasource/grpc_datasource/execution_plan_visitor.go (11 hunks)
- v2/pkg/grpctest/Makefile (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- v2/pkg/engine/datasource/grpc_datasource/compiler.go
🧰 Additional context used
🪛 checkmake (0.2.2)
v2/pkg/grpctest/Makefile
[warning] 47-47: Missing required phony target "all"
(minphony)
[warning] 47-47: Missing required phony target "clean"
(minphony)
[warning] 47-47: Missing required phony target "test"
(minphony)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: Build and test (go 1.25 / ubuntu-latest)
- GitHub Check: Build and test (go 1.25 / windows-latest)
- GitHub Check: Build and test (go 1.25 / ubuntu-latest)
🔇 Additional comments (9)
v2/pkg/grpctest/Makefile (2)
44-48: Recommend adding standard phony targets for project hygiene.

Static analysis flagged missing `all`, `clean`, and `test` phony targets. Consider adding these standard Make conventions:

+.PHONY: all
+all: generate-all
+
+.PHONY: clean
+clean:
+	rm -f product.proto
+	rm -f testdata/mapping.json
+	rm -rf productv1
+	rm -rf mapping
+
+.PHONY: test
+test:
+	go test ./...
 .PHONY: generate-all
 generate-all: generate-mapping-code generate-proto
18-22: The review comment is incorrect and should be disregarded.

The claimed bootstrap failure risk does not exist. `product.proto` is tracked in the repository (not generated or gitignored), so it will be present on fresh clones. Running `generate-proto` directly on a fresh clone will succeed; it will compile the existing committed `product.proto`.

The two flows serve distinct purposes and are not conflicting:

- generate-proto: compiles the existing committed `product.proto`
- regenerate-proto: regenerates `product.proto` from the GraphQL schema
- generate-all: orchestrates both (regenerate then compile)

The dependency chain (`generate-mapping-code` → `regenerate-proto` → `generate-all` → `generate-proto`) is correctly structured.

Likely an incorrect or invalid review comment.
v2/pkg/engine/datasource/grpc_datasource/execution_plan_visitor.go (7)
73-94: LGTM: Visitor initialization is well-structured.

The initialization of the new state fields is correct, using `ast.InvalidRef` as sentinel values and properly initializing collections. The registration of the `DocumentVisitor` aligns with the new `LeaveDocument` method.
160-163: LGTM: TypeDefinitions depth check properly guards against panic.

The depth check for the `TypeDefinitions` stack has been correctly implemented, addressing the concern from the previous review. This prevents potential panics when the stack is too shallow.
297-325: LGTM: Explicit isRootField parameter improves clarity.

Passing `isRootField` as a parameter instead of checking walker state inside the method makes the control flow more explicit and easier to test. The initialization logic is correct.
411-437: LGTM: Proper cleanup and state management in LeaveField.

The field exit logic correctly maintains symmetry with `EnterField`: fieldPath is popped, resolvedFieldIndex is reset, and relatedCallID is decremented for fields with arguments. The state management is sound.
31-41: contextField struct is actively used and not unused.

The struct is utilized in `execution_plan.go`, where it defines the type for the `contextFields` slice (line 722). It's actively populated (line 736) and iterated over (lines 927-940), with both fields (`fieldRef` and `resolvePath`) being accessed. The original concern about the struct being unused is incorrect.

Both structs do lack documentation comments explaining their purpose and should be documented.

Likely an incorrect or invalid review comment.
358-380: The review comment is based on an architectural misunderstanding. After examining `setResolvedField`, its callers, and related code:

Nested field arguments with parent-child dependencies are properly tracked via `responsePath` (constructed with `WithFieldNameItem`) and `parentTypeRef` in the `resolvedField` struct.

Circular dependency detection intentionally occurs in the execution phase (`fetch.go`, lines 46-54), not during planning. This is by design: `setResolvedField` is part of the planning phase and correctly defers cycle detection to execution time via depth-first traversal with `cycleChecks`.

Resolution path construction correctly uses `WithFieldNameItem` to incrementally build paths, and the `relatedCallID++` increment properly tracks nested call hierarchies.

The implementation follows a sound two-phase architecture: planning builds the execution plan with proper dependency and path tracking, then execution validates and detects cycles. No changes are needed.
150-193: Implementation correctly handles all argument types including lists. The `createRPCFieldFromFieldArgument` function properly supports scalar, enum, input object, and list argument types. The `buildInputMessageField` call at line 600 preserves the full type reference (including List/NonNull wrappers), and line 682 explicitly checks for list types with `r.definition.TypeIsList(typeRef)`. List types are correctly handled through either `ListMetadata` (for nullable or nested lists) or the `Repeated` protobuf field (for non-nullable single lists). The comment at line 584 could be clarified to mention list support.
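The branching between the two list encodings can be sketched as a small decision function; `listType` and `listEncoding` are illustrative stand-ins under the assumption stated above (plain repeated fields only for non-nullable single-level lists), not the actual planner types:

```go
package main

import "fmt"

// listType models just the two properties the decision depends on.
type listType struct {
	nullable bool
	depth    int // nesting level: 1 = [T], 2 = [[T]], ...
}

// listEncoding picks a plain repeated protobuf field only for non-nullable,
// single-level lists; nullable or nested lists need wrapper metadata to
// preserve null and nesting information that `repeated` cannot express.
func listEncoding(t listType) string {
	if !t.nullable && t.depth == 1 {
		return "repeated"
	}
	return "list-metadata"
}

func main() {
	fmt.Println(listEncoding(listType{nullable: false, depth: 1})) // repeated
	fmt.Println(listEncoding(listType{nullable: true, depth: 1}))  // list-metadata
	fmt.Println(listEncoding(listType{nullable: false, depth: 2})) // list-metadata
}
```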
```go
if r.resolvedFieldIndex != ast.InvalidRef {
	lbrace := r.operation.SelectionSets[ref].LBrace.CharEnd
	rbrace := r.operation.SelectionSets[ref].RBrace.CharStart - 1

	r.resolvedFields[r.resolvedFieldIndex].requiredFields = unsafebytes.BytesToString(r.operation.Input.RawBytes[lbrace:rbrace])
	r.walker.SkipNode()
	return
}
```
Add bounds check for resolvedFieldIndex before array access.

While `resolvedFieldIndex != ast.InvalidRef` confirms it's set, there's no verification that it's within the bounds of `resolvedFields`. If the index is out of bounds, this will panic.

Apply this diff to add bounds checking:

```diff
 	// If we are inside of a resolved field that selects multiple fields, we get all the fields from the input and pass them to the required fields visitor.
 	if r.resolvedFieldIndex != ast.InvalidRef {
+		if r.resolvedFieldIndex >= len(r.resolvedFields) {
+			r.walker.StopWithInternalErr(fmt.Errorf("internal: resolvedFieldIndex %d out of bounds for resolvedFields length %d", r.resolvedFieldIndex, len(r.resolvedFields)))
+			return
+		}
 		lbrace := r.operation.SelectionSets[ref].LBrace.CharEnd
 		rbrace := r.operation.SelectionSets[ref].RBrace.CharStart - 1
```
📝 Committable suggestion

‼️ IMPORTANT: Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```go
if r.resolvedFieldIndex != ast.InvalidRef {
	if r.resolvedFieldIndex >= len(r.resolvedFields) {
		r.walker.StopWithInternalErr(fmt.Errorf("internal: resolvedFieldIndex %d out of bounds for resolvedFields length %d", r.resolvedFieldIndex, len(r.resolvedFields)))
		return
	}
	lbrace := r.operation.SelectionSets[ref].LBrace.CharEnd
	rbrace := r.operation.SelectionSets[ref].RBrace.CharStart - 1
	r.resolvedFields[r.resolvedFieldIndex].requiredFields = unsafebytes.BytesToString(r.operation.Input.RawBytes[lbrace:rbrace])
	r.walker.SkipNode()
	return
}
```
🤖 Prompt for AI Agents

In v2/pkg/engine/datasource/grpc_datasource/execution_plan_visitor.go around lines 203 to 210, the code checks r.resolvedFieldIndex != ast.InvalidRef but does not ensure the index is within the bounds of r.resolvedFields; add a bounds check (0 <= r.resolvedFieldIndex && r.resolvedFieldIndex < len(r.resolvedFields)) before accessing r.resolvedFields[r.resolvedFieldIndex], and if the check fails, call r.walker.SkipNode() and return (optionally log or silently ignore) to avoid a panic from an out-of-range index.