# Operational integration patterns

Source: https://docs.settlemint.com/docs/developer-guides/api-integration/operational-integration-patterns
Answers common integration questions about DALP event access, token discovery, upgrade operations, operational monitoring, and self-hosted deployment responsibilities.



Use this page when you are connecting DALP to an off-chain ledger, cap table service, analytics store, operations console, or customer platform.

DALP exposes integration surfaces through REST APIs, the generated OpenAPI specification, indexed token events, account activity reads, blockchain monitoring endpoints, and deployment operations. Event and query consumers should build from the public API contract rather than reading internal databases directly.

## Event access model [#event-access-model]

For token operations, use the token events collection:

```bash
curl --globoff "https://your-platform.example.com/api/v2/tokens/0xTOKEN/events?page[offset]=0&page[limit]=50&sort=-blockTimestamp" \
  -H "X-Api-Key: sm_dalp_xxxxxxxxxxxxxxxx"
```

The endpoint returns the canonical collection envelope:

* `data`: event items
* `meta`: total count and facet counts
* `links`: pagination links for the current query

The default sort is newest first by `blockTimestamp`. You can also sort by `blockNumber`. Supported filters include `eventName`, `senderAddress`, `accountAddress`, `walletAddress`, `transactionHash`, and `blockTimestamp` ranges.
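As a sketch of how those parameters compose, the helper below builds an events query URL. The base URL and token address are placeholders, and the `[eq]` filter operator is assumed from the transaction-hash example later on this page; confirm per-field operator support against the OpenAPI specification.

```python
from urllib.parse import urlencode

# Placeholder base URL for your deployment.
BASE = "https://your-platform.example.com/api/v2"

def events_url(token, offset=0, limit=50, sort="-blockTimestamp", **filters):
    """Compose a token-events URL from the documented pagination,
    sort, and filter parameters."""
    params = {
        "page[offset]": str(offset),
        "page[limit]": str(limit),
        "sort": sort,
    }
    # filter[field][eq]=value; operator support is an assumption here.
    for name, value in filters.items():
        params[f"filter[{name}][eq]"] = value
    return f"{BASE}/tokens/{token}/events?{urlencode(params)}"

url = events_url("0xTOKEN", eventName="TransferCompleted")
```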

Use pagination to backfill or replay reads for a token. DALP does not document Kafka as a public token-event delivery interface; if you need event-driven processing, use the REST collection for durable reads and the OpenAPI specification to generate a typed client. Live operation screens may use server-sent events where a specific endpoint documents a stream, such as blockchain monitoring snapshots or migration progress, but token events are consumed through the REST event collection.

Related pages:

* [Token holders and transfers](/docs/developer-guides/api-integration/token-holders-transfers#list-token-events)
* [API reference](/docs/developer-guides/api-integration/api-reference)

## Event payload fields [#event-payload-fields]

Token event items carry the operational fields needed for ledger reconciliation: block number, block timestamp, transaction hash, event name, emitting contract, sender, related account, amount, and event values when present.

Example shape:

```json
{
  "id": "evt_01j...",
  "eventName": "TransferCompleted",
  "blockNumber": "8154321",
  "blockTimestamp": "2026-05-01T11:59:30.000Z",
  "transactionHash": "0xbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb",
  "txIndex": "0",
  "emitter": { "id": "0x2000000000000000000000000000000000000002" },
  "sender": { "id": "0x3000000000000000000000000000000000000003" },
  "values": [
    {
      "id": "evt_01j...-account",
      "name": "account",
      "value": "0x3000000000000000000000000000000000000003"
    },
    { "id": "evt_01j...-amount", "name": "amount", "value": "500" }
  ]
}
```

Consumers should treat the OpenAPI response schema as the contract. Use exact transaction-hash filtering when reconciling one operation:

```bash
curl --globoff "https://your-platform.example.com/api/v2/tokens/0xTOKEN/events?filter[transactionHash][eq]=0xTRANSACTION_HASH" \
  -H "X-Api-Key: sm_dalp_xxxxxxxxxxxxxxxx"
```

## Ordering, idempotency, and replay [#ordering-idempotency-and-replay]

Token events are exposed as queryable, paginated collections. The default event order is newest first by `blockTimestamp`, with block and log metadata available for deterministic reconciliation. For deterministic replay jobs:

1. Scope each reader to one token address.
2. Use `blockTimestamp` windows or pagination for REST replay. The token events API includes `blockNumber` in each event item, but block-number range replay is an indexer capability rather than a public token-event query parameter.
3. Persist the last processed event identifier, timestamp, transaction hash, block number, and transaction index.
4. On resume, reread inclusively from the last processed timestamp or a bounded timestamp window.
5. Dedupe by event identifier plus transaction hash, block number, and transaction index.
6. Store processed transaction hash, block number, event name, event identifier, transaction index, and token contract address in your own ledger.
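Steps 3 through 5 can be sketched as a small checkpoint-and-dedupe loop. The field names follow the event payload shown earlier; the `ReplayState` structure itself is illustrative, not a DALP API.

```python
from dataclasses import dataclass, field

@dataclass
class ReplayState:
    """Per-token checkpoint (step 3): last timestamp plus seen keys."""
    last_timestamp: str = ""
    seen: set = field(default_factory=set)

def dedupe_key(evt):
    # Step 5: event id plus transaction hash, block number, and tx index.
    return (evt["id"], evt["transactionHash"],
            evt["blockNumber"], evt["txIndex"])

def process(state, events):
    """Apply only unseen events; advance the checkpoint as we go."""
    applied = []
    for evt in events:
        key = dedupe_key(evt)
        if key in state.seen:
            continue  # rereads are inclusive (step 4); already mirrored
        state.seen.add(key)
        # ISO-8601 timestamps compare correctly as strings.
        state.last_timestamp = max(state.last_timestamp, evt["blockTimestamp"])
        applied.append(evt)
    return applied
```

In production the `seen` set and checkpoint would be persisted (step 6) rather than held in memory.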

For mutation APIs that submit transactions, pass an `Idempotency-Key` header. DALP uses that key when queueing blockchain transactions so a retry can return the existing request or completed result instead of submitting the same transaction twice. Read-only event collection calls do not need an idempotency key; they should be replay-safe through persisted checkpoints and deduplication.

Mutation APIs can return transaction metadata synchronously or an async `statusUrl`, depending on the operation. If a confirmation timeout occurs, check transaction status before retrying. A successful on-chain transaction must not be submitted again.
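A minimal sketch of the client side of this contract, assuming a JSON mutation API authenticated with `X-Api-Key` as in the examples above:

```python
import uuid

def mutation_headers(api_key, idempotency_key=None):
    """Headers for a transaction-submitting call. Generate the key once
    per logical operation, persist it with the operation record, and
    reuse it on every retry so DALP can return the existing request."""
    return {
        "X-Api-Key": api_key,
        "Content-Type": "application/json",
        "Idempotency-Key": idempotency_key or str(uuid.uuid4()),
    }
```

On a confirmation timeout, look up the stored key, check transaction status first, and resend with the same key only if the transaction was never submitted.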

Related pages:

* [Token holders and transfers](/docs/developer-guides/api-integration/token-holders-transfers#list-token-events)
* [Transaction tracking](/docs/developer-guides/operations/transaction-tracking)

## Chain finality, indexer health, and reindexing [#chain-finality-indexer-health-and-reindexing]

Use blockchain monitoring endpoints to verify whether DALP can read the chain reliably. The API reports chain RPC and indexer health, including sync lag, block age, finality lag, stall time, reindex status, and recent service state.

The indexer records block hashes and detects chain reorganizations by comparing indexed blocks with the canonical RPC chain. When a reorg is detected, DALP rolls indexed state back to the fork block and reprocesses affected blocks. Consumers should still keep their own ingestion idempotent because a previously read event can disappear or be replaced after rollback and reprocessing.

Operational endpoints include:

* `GET /api/v2/blockchain-monitoring/health-metrics/summary`
* `GET /api/v2/blockchain-monitoring/health-metrics/timeline`
* `GET /api/v2/blockchain-monitoring/service-health-metrics`
* `GET /api/v2/blockchain-monitoring/health-snapshots`
* `GET /api/v2/blockchain-monitoring/health-snapshots/stream`

The stream endpoint uses server-sent events for live operations screens. Snapshot events include `eventType`, `serviceType`, `chainId`, `networkName`, `status`, `blockHeight`, `chainHeadBlock`, `syncLag`, `finalityLagBlocks`, `stallSeconds`, and optional deployment state.
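A consumer of the snapshot stream parses `data:` lines from the `text/event-stream` body. The field names below come from the snapshot description above, but the exact wire payload is illustrative; verify it against the OpenAPI specification.

```python
import json

def parse_sse_snapshots(body):
    """Extract JSON payloads from the `data:` lines of an SSE response."""
    snapshots = []
    for block in body.split("\n\n"):
        for line in block.splitlines():
            if line.startswith("data:"):
                snapshots.append(json.loads(line[len("data:"):].strip()))
    return snapshots

# Illustrative payload shape, not the exact wire format.
sample = ('data: {"eventType": "healthSnapshot", "serviceType": "indexer", '
          '"chainId": 1, "status": "healthy", "syncLag": 0, '
          '"finalityLagBlocks": 2}\n\n')
snapshots = parse_sse_snapshots(sample)
```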

Related page:

* [Blockchain monitoring](/docs/developer-guides/operations/blockchain-monitoring)

## Token and class discovery [#token-and-class-discovery]

Use the API reference and token lifecycle guides for token creation and discovery. Token creation returns the deployed contract address. After issuance, integrations usually work from a known token contract address and then use token-specific endpoints for details, holders, events, features, metadata, compliance modules, transfer approvals, and denomination assets.

Relevant endpoints include:

* `GET /api/v2/tokens` for token discovery across the platform, including free-text list search with `filter[q]=...` and field filters such as token factory, token type, name, symbol, and creation date
* `GET /api/v2/tokens/{tokenAddress}` for token details
* `GET /api/v2/tokens/{tokenAddress}/holders` for the current holder balance collection
* `GET /api/v2/tokens/{tokenAddress}/holder-balances` with `holderAddress` for one holder's balance
* `GET /api/v2/tokens/{tokenAddress}/events` for indexed token events
* `GET /api/v2/tokens/{tokenAddress}/features` for attached token features
* `GET /api/v2/tokens/{tokenAddress}/metadata` for token metadata entries
* `GET /api/v2/tokens/{tokenAddress}/compliance-modules` for compliance configuration
* `GET /api/v2/tokens/{tokenAddress}/transfer-approvals` for transfer approval records
* `GET /api/v2/system/factories` for token factory discovery on the active system
* `GET /api/v2/settings/asset-class-definitions` for configured asset class definitions

DALP does not expose a single public issuer-to-contract registry endpoint in the current API. If an integration models each share class as a separate token contract, keep the issuer-to-token mapping in the integrating system and reconcile it with token and factory discovery endpoints. Class metadata can be collected during asset creation through instrument templates, asset class definitions, and token metadata fields. Metadata mutability depends on permissions and the supported metadata update flow.
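The reconciliation loop that keeps a local issuer-to-token map honest against discovery reads can be sketched as a set diff. The map shape and function are hypothetical, belonging to the integrating system rather than DALP:

```python
def reconcile_issuer_mapping(local_map, platform_tokens):
    """Diff the integrating system's issuer->tokens map against the
    token addresses discovered via GET /api/v2/tokens."""
    mapped = set().union(*local_map.values()) if local_map else set()
    return {
        "unmapped_on_platform": platform_tokens - mapped,  # no issuer yet
        "stale_in_local_map": mapped - platform_tokens,    # not discovered
    }
```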

Related pages:

* [API reference](/docs/developer-guides/api-integration/api-reference)
* [Token lifecycle](/docs/developer-guides/api-integration/token-lifecycle)
* [Instrument templates](/docs/user-guides/asset-creation/instrument-templates)
* [Asset detail workspace](/docs/user-guides/asset-servicing/asset-detail-workspace)

## Historical balances, snapshots, and servicing records [#historical-balances-snapshots-and-servicing-records]

Use holder, event, and feature reads to reconcile cap-table and servicing systems. DALP exposes current token holders and indexed token events through the token API.

The current public API does not expose a token balance-at-block endpoint or a dividend-specific record-date snapshot endpoint. For dividend or record-date workflows, store the record date and block reference in the off-chain servicing system, then build the required snapshot from token events and current holder reconciliation. Keep the snapshot inputs and replay checkpoint so the calculation can be reproduced and audited.
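Building a record-date snapshot from events amounts to folding transfers up to the record block into balances. The event names (`Transfer`, `Mint`) and value layout below are illustrative; map them from your token's actual event schema.

```python
from collections import defaultdict

def balances_at_block(events, record_block):
    """Fold token events up to a record block into per-account balances."""
    balances = defaultdict(int)
    for evt in sorted(events, key=lambda e: int(e["blockNumber"])):
        if int(evt["blockNumber"]) > record_block:
            break  # events past the record date do not count
        values = {v["name"]: v["value"] for v in evt["values"]}
        amount = int(values["amount"])
        if evt["eventName"] == "Transfer":
            balances[values["from"]] -= amount
            balances[values["to"]] += amount
        elif evt["eventName"] == "Mint":
            balances[values["to"]] += amount
    return {acct: bal for acct, bal in balances.items() if bal}
```

Cross-check the result against current holder reads when the record block is recent, and persist the input events with the replay checkpoint so the snapshot can be reproduced.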

Related pages:

* [Lifecycle after issuance](/docs/architecture/start-here/lifecycle-after-issuance)
* [Token lifecycle](/docs/developer-guides/api-integration/token-lifecycle#feature-operations-runbook)
* [Token holders and transfers](/docs/developer-guides/api-integration/token-holders-transfers)

## Upgrade operations and compatibility [#upgrade-operations-and-compatibility]

DALP includes a guided system upgrade workflow for keeping deployed system contracts aligned with the latest implementations available for the active network. The workflow compares deployed system components with the network directory, shows which components differ, and runs the upgrade with live progress.

Relevant endpoints include:

* `GET /api/v2/system/migration/compare` for directory-versus-deployed comparison
* `POST /api/v2/system/migration/start` to start a migration or upgrade workflow
* `GET /api/v2/system/migration/active` to check active migration state
* `GET /api/v2/system/migration/{migrationId}/stream` for live migration progress

Only accounts with the required system-management permission can run upgrades. The API accepts platform admins or wallets that hold the system manager or admin role on the indexed system. Completed on-chain steps remain applied if a later step fails, so operators should review the comparison before starting and use the retry flow after fixing the reported issue.

System contract upgrades emit implementation-update events such as `ImplementationUpdated` and `BatchImplementationsSet` for system implementation changes. Integrations should still rely on the API and OpenAPI schema as the compatibility contract, because contract events are low-level operational evidence rather than a substitute for the API contract.
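An operations script deciding whether to start an upgrade might diff the compare response. The payload shape here (`components`, `deployedImplementation`, `directoryImplementation`) is an assumption for illustration; the real response schema lives in the OpenAPI specification.

```python
def components_to_upgrade(compare):
    """Return component names whose deployed implementation differs
    from the network directory, from a hypothetical compare payload."""
    return [
        c["name"]
        for c in compare.get("components", [])
        if c.get("deployedImplementation") != c.get("directoryImplementation")
    ]
```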

For integration compatibility:

* Generate clients from the current OpenAPI specification.
* Treat OpenAPI response schemas as the public API contract.
* Read token features before calling feature-specific routes.
* Reconcile events and transaction status after upgrade or migration work.
* Use blockchain monitoring deployment state when an indexer is rebuilding.

Related pages:

* [System upgrades](/docs/developer-guides/operations/system-upgrades)
* [API reference](/docs/developer-guides/api-integration/api-reference)
* [Blockchain monitoring](/docs/developer-guides/operations/blockchain-monitoring)

## Self-hosted deployment and operations [#self-hosted-deployment-and-operations]

Self-hosted DALP deployments run on Kubernetes or OpenShift through Helm charts. For Azure AKS, the self-hosting prerequisites call for managed PostgreSQL, managed Redis, object storage, backups, and managed observability unless an approved self-hosted fallback is used.

The Helm charts expose replica counts and placement controls for API and worker services. The indexer runs as its own workload and is intentionally single-replica with a recreate update strategy, so scaling and isolation should be designed around workload placement, database capacity, RPC capacity, queue throughput, and operational monitoring rather than horizontal indexer replicas.

SettleMint leads the initial installation when the prerequisites are complete. Long-term ownership of Helm upgrades, vulnerability patching, uptime monitoring, backups, and platform operations should be agreed during the deployment handover. SettleMint can also operate the environment through an agreed control-plane-managed model, depending on the commercial and operational scope. The environment must provide metrics, logs, traces, and alerting through the cloud provider or an approved managed observability stack.

Related pages:

* [Self-hosting prerequisites](/docs/architecture/self-hosting/prerequisites)
* [Blockchain monitoring](/docs/developer-guides/operations/blockchain-monitoring)

## Private keys and secrets [#private-keys-and-secrets]

DALP uses Key Guardian for private-key protection. Key Guardian supports multiple storage tiers, including encrypted database storage, cloud secret managers, hardware security modules, and third-party custody providers such as DFNS and Fireblocks.

Production deployments should use the storage tier approved for the asset value and regulatory posture. Key Guardian receives signing requests without exposing raw key material, routes the request to the configured backend, and logs key generation, signature requests, rotation, and access denials for security review.

Related pages:

* [Key Guardian](/docs/architecture/components/infrastructure/key-guardian)
* [Custody providers](/docs/architecture/integrations/custody-providers)
* [Signing flow](/docs/architecture/flows/signing-flow)

## Rate limits and throughput [#rate-limits-and-throughput]

Authentication endpoints and API-key authentication have explicit rate-limit controls. Current defaults include these authentication limits:

| Surface                                            | Default limit                  |
| -------------------------------------------------- | ------------------------------ |
| Email sign-in                                      | 5 requests per 60 seconds      |
| Email sign-up                                      | 3 requests per 60 seconds      |
| Password reset request (`/request-password-reset`) | 3 requests per 60 seconds      |
| Other core authentication endpoints                | 100 requests per 60 seconds    |
| Wallet verification endpoints                      | 100 requests per 10 seconds    |
| API-key authentication                             | 10,000 requests per 60 seconds |

Authentication rate limits use shared database storage, so counters apply across multiple API replicas. DALP trusts the `x-real-ip` header for rate-limit attribution; the accepted header name is fixed to `x-real-ip` in the auth server configuration.

Configure real-client-IP attribution before exposing authentication endpoints:

* For nginx-ingress, enable the real-IP module in the controller ConfigMap, set the real-IP source header from the trusted upstream edge proxy, keep the trusted proxy CIDR list current, and verify the controller overwrites `X-Real-IP` before forwarding to DALP. A typical checklist is `enable-real-ip: "true"`, `real-ip-header` set to the trusted upstream header, and `proxy-real-ip-cidr` restricted to the load balancer or upstream proxy ranges.
* For Traefik, configure trusted forwarded-header sources only for the entry point that receives traffic from the managed load balancer, then add a router middleware or upstream edge rule that sets `X-Real-IP` to the validated client IP before the request reaches the DALP service. Do not pass through an existing client-supplied `X-Real-IP` value.
* For Gateway API, Envoy, or OpenShift Routes, do not assume the DALP route object rewrites `X-Real-IP`. Add an equivalent trusted header rewrite in the Gateway policy, Envoy filter, OpenShift router configuration, or upstream edge proxy before forwarding traffic to DALP. If that rewrite is not configured, authentication rate-limit counters can be attributed to the gateway, router, or proxy IP instead of the client IP.
* Validate the deployment by sending requests through the public route and confirming the API service receives `x-real-ip` as the original client IP. Do not rely on other forwarded headers or client-supplied values for rate-limit attribution, because those values can be missing, incorrect, or spoofable.
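For nginx-ingress, the checklist above corresponds to a ConfigMap fragment like the following. The resource name, namespace, header choice, and CIDR are placeholders; verify the key names and behavior against your controller version before deploying.

```yaml
# Illustrative nginx-ingress controller ConfigMap values.
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  enable-real-ip: "true"
  # Header set by the trusted upstream edge proxy or load balancer.
  real-ip-header: "X-Forwarded-For"
  # Only trust real-IP information from these source ranges.
  proxy-real-ip-cidr: "10.0.0.0/8"
```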

DALP does not publish a universal transactions-per-second number for every deployment. Throughput depends on the selected chain, RPC provider, custody backend, queue configuration, infrastructure sizing, bundler settings, and operation mix.

Use load testing against the target environment and agree production rate limits during implementation. For self-hosted environments, scale decisions should follow the Kubernetes, database, Redis, ingress, and observability baselines in the self-hosting prerequisites.
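When a client does hit a limit, retry with exponential backoff and jitter rather than a tight loop. This is a generic client-side sketch, not a DALP-mandated policy; honor a server-provided `Retry-After` header first when one is present.

```python
import random

def backoff_delays(attempts, base=0.5, cap=30.0):
    """Exponential backoff with full jitter for 429/503 responses."""
    delays = []
    for attempt in range(attempts):
        # Double the ceiling each attempt, capped, then jitter within it.
        ceiling = min(cap, base * (2 ** attempt))
        delays.append(random.uniform(0, ceiling))
    return delays
```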

Related pages:

* [API reference](/docs/developer-guides/api-integration/api-reference)
* [Self-hosting prerequisites](/docs/architecture/self-hosting/prerequisites)

## SLA and operational addenda [#sla-and-operational-addenda]

The API and charts provide operational health surfaces, probes, telemetry, and monitoring endpoints, including `/health`, `/livez`, service readiness endpoints, OpenTelemetry export, blockchain monitoring health metrics, service health metrics, health snapshots, and snapshot streams.

SLA terms are not defined by the public API documentation. Treat uptime, support response times, maintenance windows, backup responsibilities, and incident processes as part of the SLA addendum or managed-service agreement for the deployment.

Related pages:

* [Blockchain monitoring](/docs/developer-guides/operations/blockchain-monitoring)
* [Self-hosting prerequisites](/docs/architecture/self-hosting/prerequisites)

## Recommended off-chain ledger pattern [#recommended-off-chain-ledger-pattern]

Use an append-only mirror in the integrating system. Store enough identifiers to make ingestion idempotent and auditable:

* token contract address
* event identifier
* event name
* block number
* block timestamp
* transaction hash
* sender address
* account or wallet address
* amount and value fields
* ingestion timestamp
* source API timestamp window and replay checkpoint

Rebuild the mirror from the token events collection when needed, and use holder reads for current balance reconciliation. Do not treat the mirror as the source of truth for token ownership. DALP and the chain remain the authoritative execution layer.
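The identifier list above maps to a mirror row like the one below. The row builder and column names are illustrative for the integrating system; the event field names follow the payload example earlier on this page.

```python
import datetime

def mirror_row(evt, token_address, checkpoint):
    """Build an append-only mirror row carrying the listed identifiers.
    Upsert on (event_id, transaction_hash, block_number) so replays
    stay idempotent."""
    values = {v["name"]: v["value"] for v in evt.get("values", [])}
    return {
        "token_address": token_address,
        "event_id": evt["id"],
        "event_name": evt["eventName"],
        "block_number": evt["blockNumber"],
        "block_timestamp": evt["blockTimestamp"],
        "transaction_hash": evt["transactionHash"],
        "sender_address": evt["sender"]["id"],
        "account_address": values.get("account"),
        "amount": values.get("amount"),
        "ingested_at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "replay_checkpoint": checkpoint,
    }
```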
