DeployVerdict
"Can this deploy safely?"
Send your post-deploy metrics, receive a SAFE, WARNING or STOP verdict.
Introduction
DeployVerdict is a defensive API that answers one simple question: "Can this deploy safely?" It returns a deterministic verdict: the same input always produces the same output. No LLM, no stored state, and built to integrate into your automated pipelines.
Input
Send a POST request with a JSON payload. See the Request example below.
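A minimal sketch of sending that payload from a deploy script, assuming Python with the `requests` library. The endpoint URL and RapidAPI headers below are placeholders, not the real values; use the ones shown on the RapidAPI listing for your subscription.

```python
import requests

# Placeholder endpoint and credentials -- substitute the values from your
# RapidAPI subscription.
DEPLOYVERDICT_URL = "https://deployverdict.example.p.rapidapi.com/verdict"
HEADERS = {
    "Content-Type": "application/json",
    "X-RapidAPI-Key": "YOUR_RAPIDAPI_KEY",
}

# Post-deploy metrics plus the pre-deploy baseline, as in the Request example.
payload = {
    "deploy_id": "deploy_abc123",
    "metrics": {
        "error_rate": 0.02,
        "latency_p99": 245,
        "cpu_usage": 0.65,
        "memory_usage": 0.72,
    },
    "baseline": {"error_rate": 0.01, "latency_p99": 200},
}

response = requests.post(DEPLOYVERDICT_URL, json=payload, headers=HEADERS, timeout=10)
response.raise_for_status()
result = response.json()
print(result["verdict"], result["signals"])
```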
Output
The response contains a verdict and explanatory signals. See the Response example below.
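If your deploy tooling prefers a typed view of that response, a small wrapper is enough. The field names below mirror the Response example on this page; the class itself is a convenience sketch, not an official client.

```python
from dataclasses import dataclass, field

@dataclass
class DeployVerdictResponse:
    verdict: str                                           # "SAFE", "WARNING" or "STOP"
    signals: dict[str, str] = field(default_factory=dict)  # e.g. {"error_rate_delta": "+100%"}
    recommendation: str = ""                               # e.g. "Monitor closely for 15 minutes"

    @classmethod
    def from_json(cls, data: dict) -> "DeployVerdictResponse":
        # Tolerate missing optional fields; only the verdict is required here.
        return cls(
            verdict=data["verdict"],
            signals=data.get("signals", {}),
            recommendation=data.get("recommendation", ""),
        )
```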
Verdicts
| Verdict | Meaning |
|---|---|
| SAFE | Deployment can continue safely. |
| WARNING | Metrics are degraded; close monitoring is recommended. |
| STOP | Metrics are critical; a rollback is recommended. |
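In a pipeline, the verdict usually maps directly onto the stage's exit behaviour. A sketch of one way to wire that up, reusing the DeployVerdictResponse wrapper above and the example response from this page:

```python
import sys

# Example response from this page; in a real pipeline this is the parsed body
# of the POST request shown in the Input section.
result = DeployVerdictResponse.from_json({
    "verdict": "WARNING",
    "signals": {
        "error_rate_delta": "+100%",
        "latency_degradation": "moderate",
        "resource_pressure": "acceptable",
    },
    "recommendation": "Monitor closely for 15 minutes",
})

if result.verdict == "SAFE":
    print("Deploy verified, continuing.")
elif result.verdict == "WARNING":
    # Keep the deploy, but surface the signals so the on-call engineer can watch it.
    print(f"Degraded metrics: {result.signals} -- {result.recommendation}")
else:  # STOP
    # DeployVerdict never rolls back for you; trigger your own rollback job here.
    print(f"Critical metrics: {result.signals}", file=sys.stderr)
    sys.exit(1)  # fail the stage so the pipeline's own rollback step runs
```

Failing the stage on STOP leaves the actual rollback to your own tooling, which matches the list of things the API deliberately does not do.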
Signals
Each response includes a signals object that details the reasons for the verdict. These signals are deterministic and reproducible.
What the API does NOT do
- ❌ Does not execute an automatic rollback
- ❌ Does not collect metrics
- ❌ Does not replace an observability system
Request
{
  "deploy_id": "deploy_abc123",
  "metrics": {
    "error_rate": 0.02,
    "latency_p99": 245,
    "cpu_usage": 0.65,
    "memory_usage": 0.72
  },
  "baseline": {
    "error_rate": 0.01,
    "latency_p99": 200
  }
}
Response (200 OK)
{
  "verdict": "WARNING",
  "signals": {
    "error_rate_delta": "+100%",
    "latency_degradation": "moderate",
    "resource_pressure": "acceptable"
  },
  "recommendation": "Monitor closely for 15 minutes"
}
Ready to integrate DeployVerdict?
Available on RapidAPI. Integration in 5 minutes.
View on RapidAPI