PostgreSQL 18 introduced a new authentication method: OAuth. This brings PostgreSQL closer to enterprise SSO systems, while opening new discussions around design and security considerations. I spent some time exploring how Postgres implemented this method and the tradeoffs involved. This post summarizes how OAuth works in PostgreSQL and the steps I took to set it up using Keycloak, along with some practices to follow.

Here are a few points for the reader:

  • OAuth 2.0 is an authorization framework, not authentication. In practice, PostgreSQL uses OAuth tokens for both authentication and initial authorization. When using external providers, you’re effectively leveraging OpenID Connect (OIDC), an authentication layer built on top of OAuth 2.0.
  • This is not an introduction to OAuth 2.0. Please refer to this RFC instead.
  • The validator and related changes are done purely for demonstration purposes and shouldn’t be used in production.
  • I’m still learning OAuth and OpenID Connect. If you spot any errors, I’d love your feedback!

OAuth 2.0 implementation in PostgreSQL

Authentication Flow

The oauth method implements the SASL OAUTHBEARER mechanism for OAuth-based authentication. This method uses a bearer token issued by an OpenID Connect or OAuth 2.0 provider. Here is how the authentication flow works:

  1. Client requests an Access Token: When a client (like psql) tries connecting without a cached token, the server responds with an AuthenticationSASLContinue message containing an error status, a well-known URI, and the scopes the client should use to conduct an OAuth flow. The initial connection attempt fails by design. The client then completes an OAuth flow to retrieve a bearer token. OAUTHBEARER doesn’t specify any method for fetching tokens; the default method implemented by libpq and used in psql is the Device Authorization flow. It can be customized using the PQAUTHDATA_OAUTH_BEARER_TOKEN hook.
  2. Client reconnects using the OAUTHBEARER mechanism: With the bearer token in hand, the client initiates a connection again. This time, the auth field in the SASL message contains the bearer token. The server validates this token, and if the client is authorized to connect, the connection succeeds.

Token validation is delegated to the validator module, a shared library responsible for validating the token and determining whether the client is authorized to connect.

Authorization

By authorization, I’m referring to the initial authorization: whether the OAuth client is allowed to initiate a connection with the server. Authentication and this initial authorization happen in the same step. The validator module is responsible for both validating the token and determining the client’s effective database role. Its responsibilities are:

  • Checking that the token is a valid bearer token issued by a trusted party
  • Checking that the token grants sufficient permissions for the client to access the PostgreSQL server
  • Determining the effective database role that the client should use

Once the validator returns this final identifier, the server compares it against the configured pg_ident.conf mappings to determine whether the client is allowed to connect. A token may also carry information about the organizational groups the client belongs to, or the list of roles it may assume. In that case, username mapping can be delegated entirely to the validator by configuring the HBA rule with delegate_ident_mapping. This flexibility comes at a cost: the validator becomes part of the trusted authentication path and a potential single point of failure.

Setting up OAuth in PostgreSQL 18 (with Keycloak)

To set up OAuth, the following components are required:

  • Identity Provider like Keycloak that issues bearer tokens
  • PostgreSQL client supporting the v3 protocol
  • Database server built with OAuth support
  • Validator module to verify tokens, determine user identifier and ultimately authorize client

I’m intentionally keeping the Keycloak setup minimal here, since the focus of this post is PostgreSQL’s implementation rather than the identity provider’s configuration.

Installing PostgreSQL 18 and OAuth dependencies

# Installing PostgreSQL
sudo apt install -y postgresql-common
sudo /usr/share/postgresql-common/pgdg/apt.postgresql.org.sh -y
sudo apt install -y postgresql-18
# Dependencies
sudo apt install -y libpq-oauth postgresql-server-dev-18 libkrb5-dev build-essential libcurl4-openssl-dev libjwt-dev

Depending on your configuration, some of these dependencies might not be needed. For example, libjwt-dev is useful only if you’re working with JWTs.

Keycloak

I used the default setup of Keycloak in Docker. Here are the steps:

  • Set up Keycloak
  • Create a new client and enable the OAuth 2.0 Device Authorization Grant flow
  • Store the credentials to be used during login

Validator

PostgreSQL doesn’t validate tokens itself — it delegates this step to a validator module, a shared library you provide. This design allows organizations to define custom validation logic (e.g., decoding JWTs, mapping roles, verifying claims).

Since this was the first time I wrote an extension and worked with OAuth, I kept a really simple version of the validator that authorizes all users. This example implements the minimum required validator to understand the control flow and integration points:

#include "postgres.h"
#include "fmgr.h"
#include <stdio.h>
#include "libpq/oauth.h"
#include <curl/curl.h>

PG_MODULE_MAGIC;

static void ValidatorStartup(ValidatorModuleState *state) {}

static void ValidatorShutdown(ValidatorModuleState *state) {}

static bool ValidatorValidate(
    const ValidatorModuleState *state,
    const char *token,
    const char *role,
    ValidatorModuleResult *result
) {
    // In a real validator, you'd decode and verify the JWT here.
    // For now, just authorize everything.
    result->authorized = true;
    // Allow client to use whatever role has been provided
    // In prod, validate if client is authorized to login with this role
    result->authn_id = pstrdup(role);
    return true;
}


static OAuthValidatorCallbacks callbacks = {
    .magic = PG_OAUTH_VALIDATOR_MAGIC,
    .startup_cb = ValidatorStartup,
    .shutdown_cb = ValidatorShutdown,
    .validate_cb = ValidatorValidate
}; 

const OAuthValidatorCallbacks *_PG_oauth_validator_module_init(void) {
    return &callbacks;
}

This validator is a very simple one and shouldn’t be used in production. Please ensure you build a validator suited for your organization as even the smallest misconfiguration can have disastrous consequences.

To create a shared library and copy it at the right place, I used the following Makefile:

MODULE_big = validator
OBJS = validator.o
SHLIB_LINK += -lcurl
SHLIB_LINK += -ljwt
PG_CONFIG = pg_config
PGXS := $(shell $(PG_CONFIG) --pgxs)
include $(PGXS)

The Makefile builds the shared library and installs it into PostgreSQL’s module directory (note that SHLIB_LINK is set before including PGXS). To build and install it, run:

sudo make install

Once the validator has been installed, we need to tell the PostgreSQL server to use it, set up user mapping in pg_ident.conf, and create the OAuth rule in pg_hba.conf.

Add Validator in postgresql.conf

Add the following line in the file:

oauth_validator_libraries = 'validator'

The name must match the MODULE_big value provided in the Makefile.

Once the validator has been built, PostgreSQL needs to know:

  1. Which validator to use
  2. How to map OAuth identities to local users
  3. How to apply the oauth method in authentication rules

This is done through pg_ident.conf and pg_hba.conf.

Add user mapping in pg_ident.conf

For testing out OAuth, I used the following mapping:

oauth_map    /^(.+)$    postgres

This mapping is intentionally unsafe: it allows any user/client to log in as the database superuser, and hence shouldn’t be used outside local experimentation.

Update pg_hba.conf

The options in the pg_hba.conf authentication rule depend on the identity provider and validator used.

host    all             all             127.0.0.1/32            oauth  map=oauth_map issuer=http://127.0.0.1:8080/realms/postgres validator=validator scope="openid profile email"

Login

With these configurations set, we can try logging in:

PGOAUTHDEBUG=UNSAFE psql "host=127.0.0.1 dbname=postgres user=postgres \
  oauth_issuer=http://127.0.0.1:8080/realms/postgres \
  oauth_client_id={client} \
  oauth_client_secret={secret} \
  oauth_scope='openid profile email'"

I’m using PGOAUTHDEBUG=UNSAFE for this local setup because plain-HTTP issuer URLs are rejected by default.

Best Practices for OAuth Validator Module

When using OAuth, PostgreSQL delegates token validation and identity mapping to the validator, making it security-critical code where even the smallest misconfiguration could lead to privilege escalation. Here are a few practices to follow when building a validator:

1. Token Confidentiality

The module must never write the token, or any part of it, to a log or return it back to the user. Verification failures should not be raised at ERROR/FATAL level, to prevent information from being accidentally leaked to the client.

2. Validate Token Data and Expiry

Claims present in the JWT, such as iss, must be checked against trusted data maintained internally, and the validator must enforce token expiration to limit the damage from accidental token leaks.

3. Prefer claim-based mapping over regex

While pg_ident.conf allows regex-based user mapping, broad patterns are dangerous. Mapping database roles from token claims like email, preferred_username, groups, or custom roles is more robust. Claim-based membership can also help authorize the client for the requested database role, adding an extra layer of security on top of token validity.