Toxicity Verify API (1.1.0)

Classify

✨ Classify text as SPAM or NOSPAM.

Request Body schema: application/json (required)

text (string, required)
threshold (number or null, optional)

Responses

Response Schema: application/json

Free-form object (additional properties of any type)

Request samples

Content type
application/json
{
  "text": "string",
  "threshold": 0
}

Response samples

Content type
application/json
{ }
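
A minimal Python client sketch for this endpoint, using the requests library. The base URL (http://localhost:8000) and the route name (/classify) are assumptions, since this rendering does not show the actual paths; adjust both to your deployment.

import requests

BASE_URL = "http://localhost:8000"  # assumed deployment address


def classify_spam(text, threshold=None):
    """Classify text as SPAM or NOSPAM; threshold is optional (number or null)."""
    payload = {"text": text}
    if threshold is not None:
        payload["threshold"] = threshold
    # /classify is an assumed route, not taken from this reference.
    resp = requests.post(f"{BASE_URL}/classify", json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()  # free-form object per the response schema


print(classify_spam("Buy cheap meds now!!!", threshold=0.7))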

Health

✨ Unified health check for all models. Returns status for spam classifier, toxicity classifier, and system info.

Responses

Response Schema: application/json

Free-form object (additional properties of any type)

Response samples

Content type
application/json
{ }
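
A short sketch for polling the unified health check, for example as a readiness probe. The /health route is an assumption based on the operation name.

import requests

# /health is assumed; this rendering does not list the route.
resp = requests.get("http://localhost:8000/health", timeout=5)
resp.raise_for_status()
# Free-form object covering the spam classifier, toxicity classifier, and system info.
print(resp.json())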

Metrics

✨ Expose basic metrics about the application.

Note: If OTEL_PROMETHEUS_ENABLED is True, Prometheus metrics are available on a separate port (default: 8001) at the standard /metrics endpoint.

Responses

Response Schema: application/json
any

Response samples

Content type
application/json
null
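
A sketch of reading both metrics surfaces. The application route (/metrics on the main port) is an assumption; the separate Prometheus port and path follow the note above (default 8001, standard /metrics) and only exist when OTEL_PROMETHEUS_ENABLED is True.

import requests

# Application metrics route is assumed; not shown in this rendering.
app_metrics = requests.get("http://localhost:8000/metrics", timeout=5).json()
print(app_metrics)

# Prometheus exposition endpoint, per the note above (default port 8001, /metrics).
prom_text = requests.get("http://localhost:8001/metrics", timeout=5).text
print("\n".join(prom_text.splitlines()[:5]))  # first few exposition lines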

Toxicity Check

✨ Classify a single text for toxicity. Uses thread offload to avoid blocking the event loop.

Request Body schema: application/json (required)

text (string, required)
threshold (number, default: 0.5)

Responses

Response Schema: application/json

verdict (string, required)
toxic (number, required)
neutral (number, required)
raw (array of any, required)

Request samples

Content type
application/json
{
  "text": "string",
  "threshold": 0.5
}

Response samples

Content type
application/json
{
  "verdict": "string",
  "toxic": 0,
  "neutral": 0,
  "raw": [
    null
  ]
}
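
A hedged Python sketch for the single-text check; field names follow the schemas above, while the /toxicity/check route is an assumption.

import requests

BASE_URL = "http://localhost:8000"  # assumed deployment address


def check_toxicity(text, threshold=0.5):
    """Return the verdict plus toxic/neutral scores and the raw classifier output."""
    resp = requests.post(
        f"{BASE_URL}/toxicity/check",  # route assumed; not shown in this rendering
        json={"text": text, "threshold": threshold},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()


result = check_toxicity("you are awful", threshold=0.6)
print(result["verdict"], result["toxic"], result["neutral"])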

Toxicity Check Batch

✨ Classify multiple texts for toxicity in one call. Uses thread offload to avoid blocking the event loop.

Request Body schema: application/json (required)

texts (array of strings, required)
threshold (number, default: 0.5)

Responses

Response Schema: application/json

Array of objects, each with:

verdict (string, required)
toxic (number, required)
neutral (number, required)
raw (array of any, required)

Request samples

Content type
application/json
{
  "texts": [
    "string"
  ],
  "threshold": 0.5
}

Response samples

Content type
application/json
[
  {
    "verdict": "string",
    "toxic": 0,
    "neutral": 0,
    "raw": [
      null
    ]
  }
]
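
A sketch of the batch call, which submits several texts in one request instead of looping over the single-text endpoint. The /toxicity/check/batch route is an assumption, and the response array is assumed to follow the input order.

import requests

texts = ["hello there", "you are awful", "have a nice day"]

resp = requests.post(
    "http://localhost:8000/toxicity/check/batch",  # route assumed
    json={"texts": texts, "threshold": 0.5},
    timeout=30,
)
resp.raise_for_status()

# One result object per input text (ordering assumed to match the request).
for text, item in zip(texts, resp.json()):
    print(f"{text!r}: {item['verdict']} (toxic={item['toxic']:.2f})")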

Check All

✨ Perform both spam and toxicity checks simultaneously.

This endpoint runs both classifiers in parallel and returns a combined result, which is more efficient than making two separate API calls. The request payload contains the text and optional thresholds for both checks; the response combines the spam result, the toxicity result, and an overall verdict.

Request Body schema: application/json (required)

text (string, required)
spam_threshold (number or null, optional)
toxicity_threshold (number, default: 0.5)

Responses

Response Schema: application/json

text (string, required)
spam (object, required): free-form object (additional properties of any type)
toxicity (object, required): free-form object (additional properties of any type)
is_safe (boolean, required)
verdict (string, required)

Request samples

Content type
application/json
{
  "text": "string",
  "spam_threshold": 0,
  "toxicity_threshold": 0.5
}

Response samples

Content type
application/json
{
  "text": "string",
  "spam": { },
  "toxicity": { },
  "is_safe": true,
  "verdict": "string"
}
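
A sketch of the combined check, matching the efficiency note above: one request instead of separate spam and toxicity calls. The /check/all route is an assumption.

import requests

payload = {
    "text": "Limited offer, click now, you idiot",
    "spam_threshold": 0.7,      # optional (number or null)
    "toxicity_threshold": 0.5,  # optional, defaults to 0.5
}

resp = requests.post(
    "http://localhost:8000/check/all",  # route assumed; not shown in this rendering
    json=payload,
    timeout=10,
)
resp.raise_for_status()
combined = resp.json()

print(combined["verdict"], combined["is_safe"])
print(combined["spam"], combined["toxicity"])  # free-form per-check results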