Logging
Hive Gateway uses the Hive Logger to log the Gateway’s lifecycle, errors, and other events. The default logger uses JavaScript’s console API, but you can also provide a custom logger implementation. By default, Hive Gateway logs critical masked errors so that sensitive information is not exposed to the client.
The Hive Logger is a powerful tool with many features. You can learn more about it in the Hive Logger documentation.
Using the Logger
The log prop is now used in all APIs, contexts, and plugin options. It’s short and intuitive, making it easier to understand and use.
Context
The context object passed to plugins and hooks will always have the relevant logger instance provided through the log property. The same goes for all of the transports’ contexts: each transport context now has a log prop.
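For example, here is a minimal plugin sketch that reads the request-scoped logger from a hook’s context (the onExecute hook and context.log are shown again in the Plugin Hooks section below):
import { GatewayPlugin } from '@graphql-hive/gateway'

// a minimal sketch: the context passed to hooks carries a
// request-scoped logger with metadata such as the request ID
export function useExecutionLogging(): GatewayPlugin {
  return {
    onExecute({ context }) {
      context.log.debug('GraphQL execution starting')
    }
  }
}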
Plugin Setup Function
The log property in the plugin setup function contains the root-most logger instance.
import { defineConfig } from '@graphql-hive/gateway'
import { myPlugins } from './my-plugins'

export const gatewayConfig = defineConfig({
  plugins(ctx) {
    ctx.log.info('Loading plugins...')
    return [...myPlugins]
  }
})
Plugin Hooks
Across all plugins, hooks and contexts, the log property will always be provided.
It is now highly recommended to use the logger from the context at all times, because it contains the necessary metadata for increased observability, like the request ID or the execution step.
import { defineConfig } from '@graphql-hive/gateway';

export const gatewayConfig = defineConfig({
- plugins({ log }) {
+ plugins() {
    return [
      {
        onExecute({ context }) {
-         log.info('Executing...');
+         context.log.info('Executing...');
        },
        onDelegationPlan(context) {
-         log.info('Creating delegation plan...');
+         context.log.info('Creating delegation plan...');
        },
        onSubgraphExecute(context) {
-         log.info('Executing on subgraph...');
+         context.log.info('Executing on subgraph...');
        },
        onFetch({ context }) {
-         log.info('Fetching data...');
+         context.log.info('Fetching data...');
        },
      },
    ];
  },
});
Each hook will then log with the necessary metadata for increased observability, like this:
2025-04-10T14:00:00.000Z INF Executing...
  requestId: "0b1dce69-5eb0-4d7b-97d8-1337535a620e"
2025-04-10T14:00:00.000Z INF Creating delegation plan...
  requestId: "0b1dce69-5eb0-4d7b-97d8-1337535a620e"
  subgraph: "accounts"
2025-04-10T14:00:00.000Z INF Executing on subgraph...
  requestId: "0b1dce69-5eb0-4d7b-97d8-1337535a620e"
  subgraph: "accounts"
2025-04-10T14:00:00.000Z INF Fetching data...
  requestId: "0b1dce69-5eb0-4d7b-97d8-1337535a620e"
Log Levels
The default logger uses the info log level, which makes sure to log only info and higher-level logs.
Available log levels are:
- false (disables logging altogether)
- trace
- debug
- info (default)
- warn
- error
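As a quick sketch of how level filtering behaves (assuming the Logger constructor accepts a level option; see the Hive Logger documentation for the exact options):
import { Logger } from '@graphql-hive/logger'

// assumption: the constructor accepts a `level` option
const log = new Logger({ level: 'info' })

log.debug('not written, below the configured level')
log.info('written')
log.error('also written, error is above info')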
Change on Start
The logging option during Hive Gateway setup accepts:
- true to enable and log using the info level
- false to disable logging altogether
- A Hive Logger instance
- A string log level (e.g., debug, info, warn, error)
import { defineConfig } from '@graphql-hive/gateway'

export const gatewayConfig = defineConfig({
  logging: 'debug'
})
Change Dynamically
A powerful ability of the Hive Logger is allowing you to change the log level dynamically at runtime. This is useful for debugging and testing purposes. You can change the log level by calling the setLevel method on the logger instance.
Let’s write a plugin that toggles the debug log level when a secure HTTP request is made on the /toggle-debug path.
Please be very careful with securing your logger. Changing the log level from an HTTP request can be a security risk and should be avoided in production environments. Use this feature with caution and proper security measures.
import { GatewayPlugin, Logger } from '@graphql-hive/gateway'

interface ToggleDebugOptions {
  /**
   * A secret value that has to be provided alongside the
   * request authenticating its origin.
   */
  secret: string
  /**
   * The root-most logger, all of the child loggers will
   * inherit its log level.
   */
  rootLog: Logger
}

export function useToggleDebug(opts: ToggleDebugOptions): GatewayPlugin {
  return {
    onRequest({ request }) {
      if (!request.url.endsWith('/toggle-debug')) {
        return
      }
      const secret = request.headers.get('x-toggle-debug-secret')
      if (secret !== opts.secret) {
        return
      }
      // request is authenticated, we can change the log level
      if (opts.rootLog.level === 'debug') {
        opts.rootLog.setLevel('info')
      } else {
        opts.rootLog.setLevel('debug')
      }
      opts.rootLog.warn('Log level changed to %s', opts.rootLog.level)
    }
  }
}
Then use the plugin with Hive Gateway:
import { defineConfig } from '@graphql-hive/gateway'
import { useToggleDebug } from './toggle-debug'

export const gatewayConfig = defineConfig({
  plugins(ctx) {
    return [
      useToggleDebug({
        secret: 'wow-very-much-secret',
        // the plugins factory function provides the root logger,
        // all of the child loggers will inherit its log level
        rootLog: ctx.log
      })
    ]
  }
})
Finally, issue the following request to toggle the debug log level:
curl -H 'x-toggle-debug-secret: wow-very-much-secret' \
  http://localhost:4000/toggle-debug
Writing Logs in JSON format
By default, Hive Gateway prints logs in a human-readable format. However, in production environments where you use tools to consume the logs, it’s advised to print logs in JSON format.
Toggle with Environment Variable
To enable the JSON writer, pass LOG_JSON=1 as an environment variable.
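For example, when starting the gateway from the CLI (the same invocation used later in this document):
LOG_JSON=1 hive-gateway supergraph MY_SUPERGRAPH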
Use the JSON Log Writer
You can also use the JSON writer directly in your configuration.
import { defineConfig, JSONLogWriter, Logger } from '@graphql-hive/gateway'

export const gatewayConfig = defineConfig({
  logging: new Logger({ writers: [new JSONLogWriter()] })
})
Pretty Printing JSON
When using the JSON writer (either by toggling it using the environment variable or using the JSON writer directly), you can use the LOG_JSON_PRETTY=1 environment variable to enable pretty-printing the JSON logs.
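For example, combining both environment variables:
LOG_JSON=1 LOG_JSON_PRETTY=1 hive-gateway supergraph MY_SUPERGRAPH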
Custom Logger Writers
The new Hive Logger is designed to be extensible and allows you to create custom logger adapters by implementing “log writers” instead of the complete logger interface. The LogWriter interface is simply:
import { Attributes, LogLevel } from '@graphql-hive/logger'

interface LogWriter {
  write(
    level: LogLevel,
    attrs: Attributes | null | undefined,
    msg: string | null | undefined
  ): void | Promise<void>
}
As you can see, it’s very simple and allows you not only to use your favourite logger, like pino or winston, but also to implement custom writers that send logs to an HTTP consumer or write to a file.
Read more about implementing your own writers in the Hive Logger documentation.
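For instance, here is a minimal sketch of a writer that forwards logs to an HTTP consumer. The endpoint URL is a placeholder for a log collector you control:
import { Attributes, jsonStringify, LogLevel, LogWriter } from '@graphql-hive/logger'

// a minimal sketch: POST each log entry as JSON to a collector endpoint
export class HttpLogWriter implements LogWriter {
  constructor(private endpoint: string) {}
  write(level: LogLevel, attrs: Attributes | null | undefined, msg: string | null | undefined) {
    // returning the promise allows the logger to await delivery when flushing
    return fetch(this.endpoint, {
      method: 'POST',
      headers: { 'content-type': 'application/json' },
      body: jsonStringify({ level, msg, attrs })
    }).then(() => undefined)
  }
}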
Daily File Log Writer (Node.js Only)
Here is an example of a custom log writer that writes logs to a daily log file: it writes to a separate file for each day in a given directory.
import fs from 'node:fs/promises'
import path from 'node:path'
import { Attributes, jsonStringify, LogLevel, LogWriter } from '@graphql-hive/logger'

export class DailyFileLogWriter implements LogWriter {
  constructor(
    private dir: string,
    private name: string
  ) {}
  write(level: LogLevel, attrs: Attributes | null | undefined, msg: string | null | undefined) {
    // one file per day, e.g. gateway_2025-04-10.log
    const date = new Date().toISOString().split('T')[0]
    const logfile = path.resolve(this.dir, `${this.name}_${date}.log`)
    // append each entry as a single JSON line
    return fs.appendFile(logfile, jsonStringify({ level, msg, attrs }) + '\n')
  }
}
Using it is as simple as plugging it into an instance of the Hive Logger passed to the logging option:
import { defineConfig, JSONLogWriter, Logger } from '@graphql-hive/gateway'
import { DailyFileLogWriter } from './daily-file-log-writer'

export const gatewayConfig = defineConfig({
  logging: new Logger({
    // you can combine multiple writers to log to different places
    writers: [
      // this will log to the console in JSON format
      new JSONLogWriter(),
      // and this is our daily file writer
      new DailyFileLogWriter('/var/log/hive', 'gateway')
    ]
  })
})
Pino (Node.js Only)
Use the Node.js pino logger library for writing Hive Logger’s logs.
pino is an optional peer dependency, so you must install it first.
npm i pino pino-pretty
Since we’re using a custom log writer, you have to install the Hive Logger package too:
npm i @graphql-hive/logger
import pino from 'pino'
import { defineConfig } from '@graphql-hive/gateway'
import { Logger } from '@graphql-hive/logger'
import { PinoLogWriter } from '@graphql-hive/logger/writers/pino'

const pinoLogger = pino({
  transport: {
    target: 'pino-pretty'
  }
})

export const gatewayConfig = defineConfig({
  logging: new Logger({
    writers: [new PinoLogWriter(pinoLogger)]
  })
})
Winston (Node.js Only)
Use the Node.js winston logger library for writing Hive Logger’s logs.
winston is an optional peer dependency, so you must install it first.
npm i winston
Since we’re using a custom log writer, you have to install the Hive Logger package too:
npm i @graphql-hive/logger
import { createLogger, format, transports } from 'winston'
import { defineConfig } from '@graphql-hive/gateway'
import { Logger } from '@graphql-hive/logger'
import { WinstonLogWriter } from '@graphql-hive/logger/writers/winston'

const winstonLogger = createLogger({
  level: 'info',
  format: format.combine(format.timestamp(), format.json()),
  transports: [new transports.Console()]
})

export const gatewayConfig = defineConfig({
  logging: new Logger({
    writers: [new WinstonLogWriter(winstonLogger)]
  })
})
Error Handling
Error Codes
To help with debugging and to help consumers understand errors, Hive Gateway uses error codes for the following specific types of errors:
| Code | Description |
| --- | --- |
| GRAPHQL_PARSE_FAILED | The sent GraphQL operation cannot be parsed |
| GRAPHQL_VALIDATION_FAILED | The sent GraphQL operation failed validation against the schema |
| BAD_USER_INPUT | Variable or argument values are not valid in the GraphQL parameters |
| TIMEOUT_ERROR | Indicates a timeout in the subgraph execution. Keep in mind that this timeout is not always an HTTP timeout or a timeout specified by you. It might be the subgraph server that timed out. Learn more about upstream reliability to configure timeouts based on your needs. |
| SCHEMA_RELOAD | When Hive Gateway updates the schema by polling or any other way, all ongoing requests are terminated, including subscriptions and long-running defer/stream operations. In this case, this error is sent to the client to indicate a schema change. Usually, a retry is expected in this case. |
| SHUTTING_DOWN | When Hive Gateway is shutting down or restarting, like SCHEMA_RELOAD, it aborts all requests and notifies the client with this error code. After a certain amount of time, a retry can be sent. |
| UNAUTHENTICATED | The given auth credentials are not valid. Check the logs and documentation of the used auth plugin to learn more. |
| PERSISTED_QUERY_NOT_FOUND | Indicates that persisted operation information is not found in the store. Check the related persisted operation plugin docs to learn more about this error. |
| INTERNAL_SERVER_ERROR | Indicates that the error is unexpected or unspecified and masked by the gateway. It is probably caused by an unexpected network, connection, or other runtime error. You can see the details of this error in the logs. |
| DOWNSTREAM_SERVICE_ERROR | Indicates the error is subgraph-related and generated by the subgraph, not the gateway |
| COST_ESTIMATED_TOO_EXPENSIVE | Indicates that the cost of the operation is too expensive and exceeds the configured limit. See more about cost limiting (demand control) |
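Codes like SCHEMA_RELOAD and SHUTTING_DOWN explicitly invite a retry. Here is a hypothetical client-side sketch of acting on them; the error shape follows the responses shown in the next section, where the code appears on each error object:
// hypothetical client-side helper: the table above expects a retry
// for SCHEMA_RELOAD and SHUTTING_DOWN, so detect those codes
const RETRYABLE_CODES = ['SCHEMA_RELOAD', 'SHUTTING_DOWN']

interface GatewayError {
  message: string
  code?: string
}

function shouldRetry(errors: GatewayError[] | undefined): boolean {
  return (errors ?? []).some(err => err.code !== undefined && RETRYABLE_CODES.includes(err.code))
}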
Error Masking
Hive Gateway masks internal server errors by default to prevent leaking sensitive information to the client. Errors included in the subgraph’s result, however, are considered safe and are forwarded to the client (unless they carry the INTERNAL_SERVER_ERROR code, as explained below).
Understanding this concept is crucial for building secure applications.
So any HTTP errors, network errors, or other errors that are not related to the subgraph are masked by default, but any errors sent by the subgraph are not masked by default.
All masked errors are replaced with a generic error message and the original error is not exposed to the client.
{
  "errors": [
    {
      "message": "Unexpected error.",
      "code": "INTERNAL_SERVER_ERROR",
      "locations": [
        {
          "line": 2,
          "column": 3
        }
      ],
      "path": ["greeting"]
    }
  ],
  "data": null
}
But if the subgraph sends an error in the result, it is forwarded with the DOWNSTREAM_SERVICE_ERROR code, as long as the INTERNAL_SERVER_ERROR code isn’t passed.
{
  "errors": [
    {
      "message": "This error is from subgraph",
      "code": "DOWNSTREAM_SERVICE_ERROR",
      "locations": [
        {
          "line": 2,
          "column": 3
        }
      ],
      "path": ["greeting"]
    }
  ],
  "data": null
}
When INTERNAL_SERVER_ERROR is passed from the subgraph, it is masked by the gateway and sent to the client as INTERNAL_SERVER_ERROR. But the gateway will still log the original error.
Disabling masking for debugging
For debugging purposes, exposing errors to the client can be needed depending on your architecture. Error masking can be disabled using the maskedErrors option:
import { defineConfig } from '@graphql-hive/gateway'

export const gatewayConfig = defineConfig({
  maskedErrors: false
})
Receive original error in development mode
When developing locally, seeing the original error within your Chrome Dev Tools might be handy for debugging. You might be tempted to disable the masked errors via the maskedErrors config option; however, we do not recommend that at all.
Maintaining consistent behavior between development and production is crucial for not having any surprises in production. Instead, we recommend enabling the Hive Gateway development mode.
To do this, you need to start Hive Gateway with the NODE_ENV environment variable set to "development". On Unix and Windows systems, the environment variable can be set when starting the server.
NODE_ENV=development hive-gateway supergraph MY_SUPERGRAPH
Masked errors will then include the original error in the extensions field of the response:
{
  "errors": [
    {
      "message": "Unexpected error.",
      "locations": [
        {
          "line": 2,
          "column": 3
        }
      ],
      "path": ["greeting"],
      "extensions": {
        "originalError": {
          "message": "request to http://localhost:9876/greeting failed, reason: connect ECONNREFUSED 127.0.0.1:9876",
          "stack": "FetchError: request to http://localhost:9876/greeting failed, reason: connect ECONNREFUSED 127.0.0.1:9876\n at ClientRequest.<anonymous> ***"
        }
      }
    }
  ],
  "data": null
}