Cache your OpenAI responses with CachedGPT

Check out this GitHub repo: https://github.com/theptrk/cachedgpt

It’s a single-file FastAPI server that caches your OpenAI API calls so you save on tokens while you are developing.
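The core idea behind a cache like this can be sketched in a few lines: key each request on a stable hash of its body, and only hit the API on a cache miss. This is a minimal illustration of the concept, not the repo's actual code; the function names and the in-memory dict are my assumptions.

```python
import hashlib
import json

# In-memory cache for illustration; a real server might persist
# entries to disk so they survive restarts (assumption).
_cache: dict[str, dict] = {}

def cache_key(payload: dict) -> str:
    # Hash a canonical JSON form of the request body, so identical
    # prompts (same model, messages, and parameters) map to one entry.
    canonical = json.dumps(payload, sort_keys=True)
    return hashlib.sha256(canonical.encode()).hexdigest()

def cached_completion(payload: dict, call_api) -> dict:
    # call_api stands in for the real upstream OpenAI request.
    key = cache_key(payload)
    if key not in _cache:
        _cache[key] = call_api(payload)  # only the first call is billed
    return _cache[key]
```

On the client side you would point your OpenAI client's `base_url` at the local server instead of api.openai.com, so no application code changes beyond configuration.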

A hidden benefit: cached responses return instantly on the second call.