Live coding a streaming ChatGPT proxy with Swift OpenAPI—from scratch!

Day 1 | 16:20 | 00:20 | K.4.401 | Si Beaumont, Honza Dvorsky




Join us as we build a ChatGPT client, from scratch, using Swift OpenAPI Generator. We’ll take advantage of Swift OpenAPI’s pluggable HTTP transports to reuse the same generated client to make upstream calls from a Linux server, providing end-to-end streaming, backed by async sequences, without buffering upstream responses.
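As a brief sketch of the starting point: Swift OpenAPI Generator is driven by a small configuration file placed next to the `openapi.yaml` document. A minimal config that generates only the type-safe models and the client (the shape used for both the macOS app and the proxy's upstream calls) looks like this:

```yaml
# openapi-generator-config.yaml
generate:
  - types
  - client
```

The same `types` output can be shared with a `server` generation target in the proxy package, which is what lets the client and server speak the same generated Swift types.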

In this session you’ll learn how to:

  • Generate a type-safe ChatGPT macOS client and use the URLSession OpenAPI transport.
  • Stream LLM responses using Server-Sent Events (SSE).
  • Bootstrap a Linux proxy server using the Vapor OpenAPI transport.
  • Use the same generated ChatGPT client within the proxy by switching to the AsyncHTTPClient transport.
  • Efficiently transform responses from SSE to JSON Lines, maintaining end-to-end streaming.