Building a Python SDK Client

Create a lightweight wrapper around a REST API.
python
claude
sdk
Published

May 3, 2026

The boundary between an SDK and any other package wasn’t initially clear to me. Both are installable collections of pre-built code that speed up development. However, while some packages are general utilities (think pandas), an SDK usually has a service behind it, sometimes a commercial one (think boto3).

My NPS Hikes project is a FastAPI REST API backed by a PostGIS database, deployed on Render. Technical writers often document SDKs in addition to APIs. Therefore, with Claude’s help, I built an SDK as another client of the Render deployment.

My SDK

I created the SDK following a planning session with Claude. We discussed architecture choices (PyPI, private registries, monorepo), and settled on creating a separate client GitHub repository for a Python SDK.

The separate repo makes the server-client relationship explicit. The SDK doesn’t depend on the server codebase. It depends only on the OpenAPI spec as the single source of truth.

Fundamentally, my SDK is a convenience layer over a network call. It wraps an HTTP API so that developers can interact with Python objects and methods instead of constructing URLs, parsing JSON, and handling status codes.

The SDK’s value is transforming raw, untyped JSON into structured Python objects. Therefore, it wraps only the six data endpoints as six Python methods. It leaves out the visualization endpoints, which return images and HTML pages, and it excludes the NLQ endpoint, which isn’t part of the Render deployment.
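A minimal sketch shows what this wrapping pattern looks like. The `Client` class and `get_trails` method come from the SDK described below; the `Trail` fields and internal helpers are illustrative, and the real SDK’s internals may differ. This sketch uses only the standard library:

```python
import json
import urllib.parse
import urllib.request
from dataclasses import dataclass

BASE_URL = "https://seanangio-nps-hikes.onrender.com"  # the Render deployment

@dataclass
class Trail:
    # simplified stand-in for the generated response model
    name: str
    park_code: str
    hiked: bool

class Client:
    def __init__(self, base_url: str = BASE_URL):
        self.base_url = base_url.rstrip("/")

    def _url(self, path: str, params: dict) -> str:
        # build the full request URL so callers never construct one by hand
        query = urllib.parse.urlencode(params)
        return f"{self.base_url}{path}?{query}" if query else f"{self.base_url}{path}"

    def get_trails(self, **params) -> list[Trail]:
        # one Python method per data endpoint: fetch, parse, and type the JSON
        with urllib.request.urlopen(self._url("/trails", params)) as resp:
            data = json.load(resp)
        # assumes the endpoint returns a JSON array of trail objects
        return [Trail(**item) for item in data]
```

Each data endpoint gets one such method, so URL construction, parameter names, and JSON parsing stay inside the SDK.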

SDK advantages

A small code sample makes the advantages of the SDK clearer. For example, to find trails I’ve hiked in Yosemite using the raw API, I’d need to construct an HTTP request according to the documentation:

import requests

# query the trails endpoint for hiked trails in Yosemite
response = requests.get(
    "https://seanangio-nps-hikes.onrender.com/trails",
    params={"park_code": "yose", "hiked": True}
)
response.raise_for_status()  # surface HTTP errors as exceptions
data = response.json()       # a plain dict to explore by hand

The data object returned is a dict that I’d need to explore using the API documentation. On the other hand, the SDK eliminates URL construction, parameter name lookups, and JSON parsing. Using the SDK, I can get the same data with:

from nps_hikes import Client

client = Client()

trails = client.get_trails(park_code="yose", hiked=True)

Instead of a dictionary, though, trails is an instance of nps_hikes.models.TrailsResponse. With a typed Python object, the user gets IDE autocomplete and validation. The SDK also returns more instructive errors than raw HTTP status codes.
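The benefit of a typed response can be sketched with simplified stand-ins for the generated models (the real models come from the OpenAPI spec, and these field names are illustrative):

```python
from dataclasses import dataclass, field

@dataclass
class Trail:
    # illustrative fields; the real generated models mirror the OpenAPI schemas
    name: str
    length_miles: float
    hiked: bool = False

@dataclass
class TrailsResponse:
    trails: list[Trail] = field(default_factory=list)

    def __post_init__(self):
        # light validation: a wrong type fails loudly here, not deep in user code
        for t in self.trails:
            if not isinstance(t, Trail):
                raise TypeError(f"expected Trail, got {type(t).__name__}")

resp = TrailsResponse(trails=[Trail(name="Mist Trail", length_miles=3.0, hiked=True)])
print(resp.trails[0].name)  # attribute access, with IDE autocomplete
```

A typo like `resp.trails[0].nme` is caught by the IDE or a type checker, whereas a misspelled dictionary key fails silently or raises a bare KeyError at runtime.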

SDK version management

The choice of a separate repository for the SDK means that the server and the SDK have their own distinct version histories and dependencies.

One challenge this creates is keeping the SDK response models in sync with the server’s. My answer was to use datamodel-code-generator to regenerate the SDK models from the OpenAPI spec.

The workflow I’ve adopted is that if the data endpoints on the server API change, the OpenAPI spec updates. From the updated spec, I can regenerate the SDK models, run tests, bump the SDK version, and release it. To avoid drift, the SDK includes a weekly GitHub Actions workflow for syncing the spec. It fetches the live OpenAPI spec and compares it to the previous snapshot. From there, it generates new models, cleans up extraneous schemas, runs tests, and even creates a PR for review.
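The fetch-and-compare step of that workflow can be sketched roughly like this. The spec URL follows FastAPI’s default of serving the spec at /openapi.json, and the snapshot path is illustrative; regeneration itself happens by running datamodel-code-generator on the fetched spec:

```python
import hashlib
import json
from pathlib import Path
from urllib.request import urlopen

# FastAPI serves the spec at /openapi.json by default
SPEC_URL = "https://seanangio-nps-hikes.onrender.com/openapi.json"
SNAPSHOT = Path("openapi_snapshot.json")  # illustrative path, committed to the SDK repo

def fingerprint(text: str) -> str:
    # hash a normalized form so key ordering doesn't trigger false diffs
    normalized = json.dumps(json.loads(text), sort_keys=True)
    return hashlib.sha256(normalized.encode()).hexdigest()

def spec_changed(live_spec: str) -> bool:
    # compare the live spec to the stored snapshot; True means regenerate models
    if not SNAPSHOT.exists():
        return True
    return fingerprint(live_spec) != fingerprint(SNAPSHOT.read_text())
```

When `spec_changed` returns True, the workflow regenerates the models, runs the test suite, and opens a PR with the diff for review.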

One nice benefit is that this approach scales naturally. The same OpenAPI spec could generate SDKs for other programming languages.