Commits (39)

- `e05e2d9` start with migration — enisdenjo, May 21, 2025
- `bd94101` move to migration guides — enisdenjo, May 22, 2025
- `4cc77f3` logger — enisdenjo, May 22, 2025
- `f094319` sh npm2yarn — enisdenjo, May 22, 2025
- `d1fb7d8` documentation and more migration — enisdenjo, May 22, 2025
- `aa78b2f` more migration — enisdenjo, May 22, 2025
- `f879b6d` no color — enisdenjo, May 22, 2025
- `9afa2be` daily log writer — enisdenjo, May 22, 2025
- `7a25cb6` also link — enisdenjo, May 22, 2025
- `0274a16` logger writer flush — enisdenjo, Jun 23, 2025
- `7955577` logtape writer — enisdenjo, Jun 23, 2025
- `d62ad74` no forking — enisdenjo, Jun 30, 2025
- `78177ed` no node 18 and some fixes — enisdenjo, Jun 30, 2025
- `1047b1d` no multipart — enisdenjo, Jun 30, 2025
- `5da6d57` no mocking — enisdenjo, Jun 30, 2025
- `4265627` actual link — enisdenjo, Jun 30, 2025
- `1fca8ca` docs(opentelemetry): Update documentation for Hive Gateway v2 (#6852) — EmrysMyrddin, Jul 1, 2025
- `735d30e` docs(gateway): add migration for subgraph name of execution requests … — EmrysMyrddin, Jul 17, 2025
- `c72d83a` cleanup — enisdenjo, Jul 17, 2025
- `1f1301b` docs(opentelemetry): Add new CLI options documentation (#6877) — EmrysMyrddin, Jul 17, 2025
- `67717b4` a bit better — enisdenjo, Jul 17, 2025
- `aea19d8` typos and missing comma — enisdenjo, Jul 17, 2025
- `9a9bbf7` root context — enisdenjo, Jul 17, 2025
- `7021e97` new lines for visibility — enisdenjo, Jul 17, 2025
- `4e8fea8` root context — enisdenjo, Jul 17, 2025
- `120b3b0` supeRgraph — enisdenjo, Jul 17, 2025
- `4782eef` example in example page, less migration — enisdenjo, Jul 17, 2025
- `10f5fb5` match order like rest — enisdenjo, Jul 17, 2025
- `ae28b47` not removed — enisdenjo, Jul 17, 2025
- `d60d2bb` recommend hive access token and target — enisdenjo, Jul 18, 2025
- `e198222` update for new context API — EmrysMyrddin, Aug 21, 2025
- `d70df77` add configuration context documentation — EmrysMyrddin, Aug 21, 2025
- `e1f7390` typo — enisdenjo, Aug 27, 2025
- `6015a48` migrate — enisdenjo, Aug 27, 2025
- `5890cc4` redis pubsub — enisdenjo, Aug 27, 2025
- `efc77b3` see example — enisdenjo, Aug 27, 2025
- `e71f345` edfs — enisdenjo, Aug 28, 2025
- `6e22581` format — enisdenjo, Aug 28, 2025
- `5c82e9e` no source name soon — enisdenjo, Aug 29, 2025
1 change: 1 addition & 0 deletions packages/web/docs/src/content/_meta.ts
Original file line number Diff line number Diff line change
@@ -5,6 +5,7 @@ export default {
'high-availability-cdn': 'High-Availability CDN',
dashboard: 'Dashboard',
gateway: 'Gateway',
logger: 'Logger',
management: 'Management',
'other-integrations': 'Other Integrations',
'api-reference': 'CLI/API Reference',
123 changes: 91 additions & 32 deletions packages/web/docs/src/content/api-reference/gateway-cli.mdx
@@ -19,51 +19,105 @@ hive-gateway --help

which will print out the following:

{/* IMPORTANT: please dont forget to update the following when arguments change. simply run `node --import tsx packages/hive-gateway/src/bin.ts --help` and copy over the text */}
{/* IMPORTANT: please dont forget to update the following when arguments change. simply run `node --import tsx packages/gateway/src/bin.ts --help` and copy over the text */}

```
Usage: hive-gateway [options] [command]

Federated GraphQL Gateway
Unify and accelerate your data graph across diverse services with Hive Gateway, which seamlessly
integrates with Apollo Federation.

Options:
--fork <count> count of workers to spawn. uses "24" (available parallelism) workers when NODE_ENV is "production",
otherwise "1" (the main) worker (default: 1) (env: FORK)
-c, --config-path <path> path to the configuration file. defaults to the following files respectively in the current working
directory: gateway.ts, gateway.mts, gateway.cts, gateway.js, gateway.mjs, gateway.cjs (env:
CONFIG_PATH)
--fork <number> number of workers to spawn. (default: 1) (env:
FORK)
-c, --config-path <path> path to the configuration file. defaults to
the following files respectively in the
current working directory: gateway.ts,
gateway.mts, gateway.cts, gateway.js,
gateway.mjs, gateway.cjs (env: CONFIG_PATH)
-h, --host <hostname> host to use for serving (default: 0.0.0.0)
-p, --port <number> port to use for serving (default: 4000) (env: PORT)
--polling <duration> schema polling interval in human readable duration (default: 10s) (env: POLLING)
-p, --port <number> port to use for serving (default: 4000) (env:
PORT)
--polling <duration> schema polling interval in human readable
duration (default: 10s) (env: POLLING)
--no-masked-errors don't mask unexpected errors in responses
--masked-errors mask unexpected errors in responses (default: true)
--hive-usage-target <target> Hive registry target to which the usage data should be reported to. requires the
"--hive-usage-access-token <token>" option (env: HIVE_USAGE_TARGET)
--hive-usage-access-token <token> Hive registry access token for usage metrics reporting. requires the "--hive-usage-target <target>"
option (env: HIVE_USAGE_ACCESS_TOKEN)
--hive-persisted-documents-endpoint <endpoint> [EXPERIMENTAL] Hive CDN endpoint for fetching the persisted documents. requires the
"--hive-persisted-documents-token <token>" option
--hive-persisted-documents-token <token> [EXPERIMENTAL] Hive persisted documents CDN endpoint token. requires the
"--hive-persisted-documents-endpoint <endpoint>" option
--hive-cdn-endpoint <endpoint> Hive CDN endpoint for fetching the schema (env: HIVE_CDN_ENDPOINT)
--hive-cdn-key <key> Hive CDN API key for fetching the schema. implies that the "schemaPathOrUrl" argument is a url (env:
HIVE_CDN_KEY)
--apollo-graph-ref <graphRef> Apollo graph ref of the managed federation graph (<YOUR_GRAPH_ID>@<VARIANT>) (env: APOLLO_GRAPH_REF)
--apollo-key <apiKey> Apollo API key to use to authenticate with the managed federation up link (env: APOLLO_KEY)
--masked-errors mask unexpected errors in responses (default:
true)
--opentelemetry [exporter-endpoint] Enable OpenTelemetry integration with an
exporter using this option's value as
endpoint. By default, it uses OTLP HTTP, use
"--opentelemetry-exporter-type" to change the
default. (env: OPENTELEMETRY)
--opentelemetry-exporter-type <type> OpenTelemetry exporter type to use when
setting up OpenTelemetry integration. Requires
"--opentelemetry" to set the endpoint.
(choices: "otlp-http", "otlp-grpc", default:
"otlp-http", env: OPENTELEMETRY_EXPORTER_TYPE)
--hive-registry-token <token> [DEPRECATED] please use "--hive-target" and
"--hive-access-token" (env:
HIVE_REGISTRY_TOKEN)
--hive-usage-target <target> [DEPRECATED] please use --hive-target instead.
(env: HIVE_USAGE_TARGET)
--hive-target <target> Hive registry target to which the usage and
tracing data should be reported to. Requires
either "--hive-access-token <token>",
"--hive-usage-access-token <token>" or
"--hive-trace-access-token" option (env:
HIVE_TARGET)
--hive-access-token <token> Hive registry access token for usage metrics
reporting and tracing. Enables both usage
reporting and tracing. Requires the
"--hive-target <target>" option (env:
HIVE_ACCESS_TOKEN)
--hive-usage-access-token <token> Hive registry access token for usage
reporting. Enables Hive usage report. Requires
the "--hive-target <target>" option. It can't
be used together with "--hive-access-token"
(env: HIVE_USAGE_ACCESS_TOKEN)
--hive-trace-access-token <token> Hive registry access token for tracing.
Enables Hive tracing. Requires the
"--hive-target <target>" option. It can't be
used together with "--hive-access-token" (env:
HIVE_TRACE_ACCESS_TOKEN)
--hive-trace-endpoint <endpoint> Hive registry tracing endpoint. (default:
"https://api.graphql-hive.com/otel/v1/traces",
env: HIVE_TRACE_ENDPOINT)
--hive-persisted-documents-endpoint <endpoint> [EXPERIMENTAL] Hive CDN endpoint for fetching
the persisted documents. Requires the
"--hive-persisted-documents-token <token>"
option
--hive-persisted-documents-token <token> [EXPERIMENTAL] Hive persisted documents CDN
endpoint token. Requires the
"--hive-persisted-documents-endpoint
<endpoint>" option
--hive-cdn-endpoint <endpoint> Hive CDN endpoint for fetching the schema
(env: HIVE_CDN_ENDPOINT)
--hive-cdn-key <key> Hive CDN API key for fetching the schema.
implies that the "schemaPathOrUrl" argument is
a url (env: HIVE_CDN_KEY)
--apollo-graph-ref <graphRef> Apollo graph ref of the managed federation
graph (<YOUR_GRAPH_ID>@<VARIANT>) (env:
APOLLO_GRAPH_REF)
--apollo-key <apiKey> Apollo API key to use to authenticate with the
managed federation up link (env: APOLLO_KEY)
--disable-websockets Disable WebSockets support
--jit Enable Just-In-Time compilation of GraphQL documents (env: JIT)
--jit Enable Just-In-Time compilation of GraphQL
documents (env: JIT)
-V, --version output the version number
--help display help for command

Commands:
supergraph [options] [schemaPathOrUrl] serve a Federation supergraph provided by a compliant composition tool such as Mesh Compose or Apollo
Rover
subgraph [schemaPathOrUrl] serve a Federation subgraph that can be used with any Federation compatible router like Apollo
Router/Gateway
proxy [options] [endpoint] serve a proxy to a GraphQL API and add additional features such as monitoring/tracing, caching, rate
limiting, security, and more
supergraph [options] [schemaPathOrUrl] serve a Federation supergraph provided by a
compliant composition tool such as Mesh
Compose or Apollo Rover
subgraph [schemaPathOrUrl] serve a Federation subgraph that can be used
with any Federation compatible router like
Apollo Router/Gateway
proxy [options] [endpoint] serve a proxy to a GraphQL API and add
additional features such as
monitoring/tracing, caching, rate limiting,
security, and more
help [command] display help for command

```

All arguments can also be configured in the config file.
@@ -79,7 +133,12 @@ configuration file if you provide these environment variables.

- `HIVE_CDN_ENDPOINT`: The endpoint of the Hive Registry CDN
- `HIVE_CDN_KEY`: The API key provided by Hive Registry to fetch the schema
- `HIVE_REGISTRY_TOKEN`: The token to push the metrics to Hive Registry
- `HIVE_TARGET`: The target for usage reporting and observability in Hive Console
- `HIVE_USAGE_TARGET` (deprecated, use `HIVE_TARGET`): The target for usage reporting and
observability in Hive Console
- `HIVE_ACCESS_TOKEN`: The access token used for usage reporting and observability in Hive Console
- `HIVE_USAGE_ACCESS_TOKEN`: The access token used for usage reporting only in Hive Console
- `HIVE_TRACE_ACCESS_TOKEN`: The access token used for observability only in Hive Console

[Learn more about Hive Registry integration here](/docs/gateway/supergraph-proxy-source)
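
As a sketch of how the new variables from the list above fit together, the fragment below exports the recommended pair (`HIVE_TARGET` plus `HIVE_ACCESS_TOKEN`) alongside the CDN settings and then starts the gateway. All values are illustrative placeholders, not real targets, tokens, or endpoints.

```shell
# Placeholder values; variable names come from the environment-variable list above.
export HIVE_TARGET="my-org/my-project/production"  # hypothetical target slug
export HIVE_ACCESS_TOKEN="..."                     # enables both usage reporting and tracing
export HIVE_CDN_ENDPOINT="https://cdn.graphql-hive.com/..."
export HIVE_CDN_KEY="..."

# With the variables set, no CLI flags are needed to point at Hive Registry.
hive-gateway supergraph
```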

48 changes: 48 additions & 0 deletions packages/web/docs/src/content/api-reference/gateway-config.mdx
@@ -451,6 +451,54 @@ different phases of the GraphQL execution to manipulate or track the entire work

[See dedicated plugins feature page for more information](/docs/gateway/other-features/custom-plugins)

### `openTelemetry`

This option lets you enable the OpenTelemetry integration and customize its behavior.

[See the dedicated Monitoring/Tracing feature page for more information](/docs/gateway/monitoring-tracing)

#### `useContextManager`

Uses the standard `@opentelemetry/api` Context Manager to keep track of the current span. This is
an advanced option that should be used carefully, as it can break your custom plugin spans.

#### `inheritContext`

If true (the default), the HTTP span is created with the active span as its parent. If false, the
HTTP span is always a root span, creating its own trace for each request.

#### `propagateContext`

If true (the default), uses the registered propagators to propagate the active context to upstream
services.

#### `configureDiagLogger`

If true (the default), sets up the standard `@opentelemetry/api` diag API to use the Hive Gateway
logger. A child logger is created with the prefix `[opentelemetry][diag] `.

#### `flushOnDispose`

If truthy (the default), the registered span processor is forcefully flushed when Hive Gateway is
about to shut down. By default, the `forceFlush` method is called (if it exists), but you can
change the method to call by providing a string as the value of this option.

#### `traces`

Pass `true` to enable tracing integration with all spans available.

This option can also be an object for more fine-grained configuration.

##### `tracer`

The `Tracer` instance to be used. The default is a tracer with the name `gateway`.

##### `spans`

An object where each key is a span name and each value is either a boolean or a filtering
function, controlling which spans are reported.
[See Reported Spans and Events for details](/docs/gateway/monitoring-tracing#reported-spans).
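
Taken together, the tracing options above can be sketched as a config fragment. The shapes below only mirror this section's option names; the span names (`http`, `graphql`) and the payload passed to the filtering function are illustrative assumptions, not the gateway's actual API.

```typescript
// Illustrative sketch of an `openTelemetry` section; span names and the
// filter payload shape are assumptions, not the gateway's real API.
type SpanFilter = boolean | ((payload: { url?: string }) => boolean)

interface OpenTelemetrySketch {
  inheritContext?: boolean // default true: parent the HTTP span on the active span
  propagateContext?: boolean // default true: propagate context to upstream services
  configureDiagLogger?: boolean // default true: route the diag API to the gateway logger
  flushOnDispose?: boolean | string // a string names the flush method to call instead of `forceFlush`
  traces?: boolean | { spans?: Record<string, SpanFilter> }
}

const openTelemetry: OpenTelemetrySketch = {
  inheritContext: true,
  propagateContext: true,
  traces: {
    spans: {
      // always report this (hypothetical) span
      http: true,
      // filtering function: skip (hypothetical) spans for health-check requests
      graphql: ({ url }) => !(url ?? '').endsWith('/healthcheck')
    }
  }
}
```

A boolean turns a span on or off wholesale; a filtering function decides per request, which is useful for dropping noisy traffic such as health checks.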

### `cors`

[See dedicated CORS feature page for more information](/docs/gateway/other-features/security/cors)
@@ -18,17 +18,14 @@ So you can benefit from the powerful plugins of Fastify ecosystem with Hive Gate

## Example

In order to connect Fastify's logger to the gateway, you need to install the
`@graphql-hive/logger-pino` package together with `@graphql-hive/gateway-runtime` and `fastify`.

```sh npm2yarn
npm i @graphql-hive/gateway-runtime @graphql-hive/logger-pino fastify
npm i @graphql-hive/gateway-runtime @graphql-hive/logger fastify
```

```ts
import fastify, { type FastifyReply, type FastifyRequest } from 'fastify'
import { createGatewayRuntime } from '@graphql-hive/gateway-runtime'
import { createLoggerFromPino } from '@graphql-hive/logger-pino'
import { createGatewayRuntime, Logger } from '@graphql-hive/gateway-runtime'
import { PinoLogWriter } from '@graphql-hive/logger/writers/pino'

// Request ID header used for tracking requests
const requestIdHeader = 'x-request-id'
@@ -52,8 +49,10 @@ interface FastifyContext {
}

const gateway = createGatewayRuntime<FastifyContext>({
// Integrate Fastify's logger / Pino with the gateway logger
logging: createLoggerFromPino(app.log),
// Use Fastify's logger (Pino) with Hive Logger
logging: new Logger({
writers: [new PinoLogWriter(app.log)]
}),
// Align with Fastify
requestId: {
// Use the same header name as Fastify
@@ -105,7 +105,7 @@ You can then generate the supergraph file using the `mesh-compose` CLI from
npx mesh-compose supergraph
```

#### Compose supegraph with Apollo Rover
#### Compose supergraph with Apollo Rover

Apollo Rover only allows exporting the supergraph as a GraphQL document, so we will have to wrap
this output into a JavaScript file:
Expand Down