N+1 Detection
The N+1 Problem in GraphQL
When a GraphQL query resolves a list of items and each item triggers a separate upstream request for related data, the total number of requests becomes 1 (for the list) + N (one per item). This is the N+1 problem, and it can severely degrade performance.
For example, fetching 50 users and then resolving each user’s posts individually results in 51 HTTP calls instead of 2.
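The request arithmetic can be made concrete with a small sketch. This is illustrative Python, not GQLForge code; it simply counts upstream calls under the two strategies:

```python
def fetch_naive(user_ids):
    """One request for the user list, then one request per user's posts."""
    requests = 1  # GET /users
    for _ in user_ids:
        requests += 1  # GET /posts?user_id=<id>
    return requests

def fetch_batched(user_ids):
    """One request for the user list, one batched request for all posts."""
    return 2  # GET /users, then a single GET /posts?user_id=...&user_id=...

print(fetch_naive(range(50)))    # 51
print(fetch_batched(range(50)))  # 2
```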
Compile-Time Detection
GQLForge can detect potential N+1 issues before your server even starts. Run the `check` command with the `--n-plus-one-queries` flag:
```bash
gqlforge check --n-plus-one-queries config.graphql
```
This analyzes your schema and reports any fields that could produce N+1 request patterns. The output lists each problematic query path so you can address them before deployment.
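To build intuition for the kind of static check involved, here is a toy sketch (an assumption, not GQLForge's actual implementation): it walks a schema representation and flags fields that resolve inside a list context via an upstream call but have no `batch_key` configured.

```python
def find_n_plus_one(schema):
    """Flag list-context fields whose upstream directive lacks a batch_key.

    `schema` is a hypothetical simplified representation:
    {type_name: {field_name: {"in_list_context": bool, "http": {...}}}}
    """
    issues = []
    for type_name, fields in schema.items():
        for field, meta in fields.items():
            upstream = meta.get("http")
            if meta.get("in_list_context") and upstream and not upstream.get("batch_key"):
                issues.append(f"{type_name}.{field}")
    return issues

schema = {
    "User": {
        # Resolved per-item with an HTTP call and no batching: an N+1 risk.
        "posts": {"in_list_context": True, "http": {"url": "/posts"}},
        # Plain scalar field: no upstream call, nothing to flag.
        "name": {"in_list_context": True},
    }
}
print(find_n_plus_one(schema))  # ['User.posts']
```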
Resolving N+1 with Batching
The primary solution is to use the `batch_key` argument on the `@http` or `@grpc` directive. This tells GQLForge to group individual requests into a single batched call.
Example with @http
```graphql
type User {
  id: Int!
  posts: [Post]
    @http(
      url: "https://api.example.com/posts"
      query: [{ key: "user_id", value: "{{.value.id}}" }]
      batch_key: ["user_id"]
    )
}
```
With `batch_key` set, GQLForge collects all `user_id` values from the parent list and sends a single request like `GET /posts?user_id=1&user_id=2&user_id=3` instead of making separate calls for each user.
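The mechanics of that batching can be sketched in Python (an illustration of the idea, not GQLForge internals): collect every parent id into one query string, then route each item in the combined response back to its parent.

```python
from collections import defaultdict
from urllib.parse import urlencode

def build_batched_url(base, key, values):
    # One query string carrying every parent id:
    # /posts?user_id=1&user_id=2&user_id=3
    return base + "?" + urlencode([(key, v) for v in values])

def group_by_key(items, key):
    # Route each item in the combined response back to its parent.
    grouped = defaultdict(list)
    for item in items:
        grouped[item[key]].append(item)
    return grouped

url = build_batched_url("https://api.example.com/posts", "user_id", [1, 2, 3])
print(url)  # https://api.example.com/posts?user_id=1&user_id=2&user_id=3

posts = [
    {"id": 10, "user_id": 1},
    {"id": 11, "user_id": 2},
    {"id": 12, "user_id": 1},
]
by_user = group_by_key(posts, "user_id")
print(len(by_user[1]))  # 2
```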
Example with @grpc
```graphql
type User {
  id: Int!
  profile: Profile
    @grpc(
      service: "ProfileService"
      method: "GetProfiles"
      body: "{{.value.id}}"
      batch_key: ["id"]
    )
}
```
The same batching concept applies to gRPC calls. GQLForge groups the individual IDs and sends them in a single RPC request.
Best Practices
- Run `check --n-plus-one-queries` in your CI pipeline to catch regressions early.
- Always set `batch_key` on fields that resolve within a list context.
- Design your upstream APIs to accept arrays of identifiers for batch lookups.