As you take your plugin from development to production, there are many considerations to keep in mind, but few are as important as security. Securing your ChatGPT plugin is perhaps the most important step you can take to provide robust, continued access for your plugin's users. In this post, we'll cover the steps you should take to secure your ChatGPT plugin APIs.
The first step in securing your plugin API is protecting your plugin endpoints. Specifically, we recommend implementing a separate API specific to your plugin, even if your application has an existing API. These endpoints act as a dedicated surface for users who reach your API through the plugin rather than through other platforms. This separation affords several conveniences; one is that you can implement logic to ensure your plugin API is only ever accessible from OpenAI's servers.
According to OpenAI's plugin documentation, ChatGPT calls plugins from the CIDR block 23.102.140.112/28. You should explicitly allowlist traffic to your plugin API from IP addresses within this CIDR block. You can do this at the application level or the network level. Most web frameworks provide the ability to implement custom middleware which can handle this logic.
For example, with FastAPI, you can implement an allowlist with the following middleware:
```python
from fastapi import FastAPI, Request, status
from fastapi.responses import JSONResponse

app = FastAPI()

@app.middleware('http')
async def validate_ip(request: Request, call_next):
    ip = str(request.client.host)
    if ip not in ALLOWED_IPS:
        data = {
            'message': f'IP {ip} is not allowed to access this resource.'
        }
        return JSONResponse(status_code=status.HTTP_400_BAD_REQUEST, content=data)
    # Proceed if the IP is allowed
    return await call_next(request)
```
You can build the allowed IP list from the given CIDR block using:
```python
import ipaddress

ALLOWED_IPS = [str(ip) for ip in ipaddress.IPv4Network('23.102.140.112/28')]
```
This is a simplified example which doesn't account for running your application behind a reverse proxy. In that case, request.client.host reports the proxy's address, and you'd need to read the original client IP from a header such as X-Forwarded-For.
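As a sketch of how the reverse-proxy case might be handled (the helper name and trust flag are our own, not part of any framework): only consult X-Forwarded-For when a proxy you control sets it, and take the first address in the list.

```python
def client_ip(headers: dict, fallback: str, trust_proxy: bool = False) -> str:
    """Resolve the caller's IP, optionally honoring X-Forwarded-For.

    Only set trust_proxy=True when a reverse proxy you control sets the
    header; otherwise clients can spoof it.
    """
    forwarded = headers.get('x-forwarded-for')
    if trust_proxy and forwarded:
        # The first entry is the original client; later entries are proxies.
        return forwarded.split(',')[0].strip()
    return fallback
```

The result of this helper is what you'd check against ALLOWED_IPS in the middleware above.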
Another approach is to secure your application at the network-level. For example, you can use GCP or another cloud provider to configure firewall rules which only allow traffic from ChatGPT’s IP range.
In addition to setting up rules to block traffic to your API from non-ChatGPT users, you should implement some form of authentication. Plugins offer a number of authentication mechanisms. At a minimum, we recommend service-level authentication. With service-level authentication, you provide a client secret to ChatGPT on the first deployment or installation of your plugin. OpenAI stores an encrypted version of the client secret, and every request made from ChatGPT will include it in the Authorization header. You can check that the client secret in each incoming request matches the one you registered.
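A minimal sketch of that check (the secret value and helper name here are illustrative): ChatGPT sends the secret as a bearer token, so strip the scheme and compare in constant time.

```python
import hmac

# Hypothetical secret; in practice, load it from an environment variable
# or a secret manager rather than hard-coding it.
SERVICE_AUTH_TOKEN = 'your-plugin-secret'

def is_valid_service_token(auth_header: str) -> bool:
    """Validate an Authorization header of the form 'Bearer <secret>'."""
    scheme, _, token = auth_header.partition(' ')
    # hmac.compare_digest performs a constant-time comparison, which
    # avoids leaking the secret through response-timing differences.
    return scheme.lower() == 'bearer' and hmac.compare_digest(token, SERVICE_AUTH_TOKEN)
```

You'd call this from middleware, much like the IP allowlist, and return a 401 when it fails.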
Note that service-level authentication grants access to your plugin API for all ChatGPT users. Your client secret is a single point of failure in this instance. If it leaks, you run the risk of outside users accessing your API. You should keep the client secret private, and rotate secrets periodically.
In some cases, you might want to further restrict access to only registered users of your application. In that case, you should implement an OAuth server for authenticating registered users from within the plugin.
Rate limiting is a strategy for limiting network traffic by capping how often someone can repeat an action within a certain timeframe. You should implement rate limits on top of your plugin API to prevent your servers from getting overloaded and degrading the performance of your plugin.
There are several strategies for implementing rate limiting. You may choose to enforce a service-level rate limit: for example, a hard cap on the total number of requests your plugin API will accept from ChatGPT in a given timeframe. While a service-level rate limit is simple, it has drawbacks. One is that an influx of users making requests at the same time risks degrading performance for all users.
A better approach is to rate limit at the user level. Even if you've only implemented service-level authentication, you can still rate limit per user. But how is that possible if you don't have access to user-specific information? Well, ChatGPT sends two headers with every request that can be used to differentiate between plugin users. These are:
- `openai-ephemeral-user-id`
- `openai-conversation-id`
Both of these headers are UUIDs which you can use to implement rate limiting per user or per conversation. For actually implementing the rate limiting algorithm, Redis is commonly used, and there are a number of resources for rate limiting using Redis.
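To make the algorithm concrete, here is a fixed-window limiter keyed on the openai-ephemeral-user-id header. The counters are in-memory purely for illustration (and the budget and window sizes are arbitrary); in production you'd back them with Redis so they're shared across app instances.

```python
import time
from collections import defaultdict
from typing import Optional

WINDOW_SECONDS = 60
MAX_REQUESTS = 30  # arbitrary per-user budget for illustration

# user_id -> [window_start, request_count]
_counters = defaultdict(lambda: [0.0, 0])

def allow_request(user_id: str, now: Optional[float] = None) -> bool:
    """Fixed-window rate limiter; returns False once a user's budget is spent."""
    now = time.monotonic() if now is None else now
    window_start, count = _counters[user_id]
    if now - window_start >= WINDOW_SECONDS:
        _counters[user_id] = [now, 1]  # start a fresh window
        return True
    if count < MAX_REQUESTS:
        _counters[user_id][1] = count + 1
        return True
    return False
```

A request handler would pass the value of `openai-ephemeral-user-id` as user_id, or `openai-conversation-id` to limit per conversation instead.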
The final step you should take in securing your application is implementing some sort of auditing mechanism for inspecting requests made to your application from the ChatGPT UI. If you've implemented user-level authentication, you should tie requests to users so you can track potentially malicious behavior. Even if you don't have user-level authentication implemented, you should log the aforementioned headers OpenAI passes alongside each request. These can be used later to identify and block malicious users. If you find a user is abusing your plugin, you can point OpenAI to specific information (e.g. the conversation and user IDs) so they can take appropriate action.
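One lightweight way to do this (the field names here are our own) is to emit a structured log line per request that captures the two OpenAI headers alongside the route and response status:

```python
import json
import time

def audit_record(headers: dict, path: str, status_code: int) -> str:
    """Serialize one plugin request as a JSON audit log line."""
    record = {
        'ts': time.time(),
        'path': path,
        'status': status_code,
        # The two per-request identifiers ChatGPT includes with every call
        'user_id': headers.get('openai-ephemeral-user-id'),
        'conversation_id': headers.get('openai-conversation-id'),
    }
    return json.dumps(record)
```

You could write these lines with the standard logging module from the same middleware that handles the IP allowlist, then filter by user_id or conversation_id when investigating abuse.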
Securing your plugin APIs is the most important step you’ll take when deploying your plugin. In this post, you learned the minimum steps you should take before deploying. If you have any questions about securely deploying plugins, or developing plugins in general, feel free to reach out. Until next time!