1. Purpose
To ensure the stability and optimal performance of the Eliq Data Management and Insights APIs, we kindly ask clients to follow the usage guidelines outlined in this document. These guidelines are designed to safeguard a consistent, reliable, and secure experience for everyone across our platform.
Eliq continuously monitors API usage to proactively identify and address potential issues. If unusual activity is detected, we will reach out to your team to review the situation together, determine the root cause, and agree on corrective measures. We ask that clients engage with us within 5 business days of initial contact and implement agreed actions within 10 business days, unless otherwise mutually agreed.
If an issue remains unresolved or recurs without effective mitigation, Eliq may need to apply additional charges to cover the operational impact. In rare and critical cases, where ongoing misuse poses a significant risk to the platform, temporary access restrictions or termination of the service agreement may be considered. Such measures are always a last resort, taken only after all collaborative options have been fully explored.
2. Scope
These rules apply to all consumers of the APIs, including Eliq customers, partners, internal teams, and third-party integrators.
3. Rule Types
- Requirements
These rules must be followed. Failure to comply may result in operational interventions or, where necessary, contractual consequences.
- Recommendations
These guidelines should be followed. While not mandatory, they help improve performance, stability, and resilience. In cases where a lack of adherence leads to misuse that impacts the wider platform, enforcement actions may still be applied.
4. API Usage Requirements
- Batch endpoints (e.g., /jobs) must be used instead of multiple single requests when loading data via the Data Management API in production.
- Documented format requirements (e.g., ndjson, JSON) must be adhered to.
- For upsert/delete/export jobs, file naming and file size limits must be respected; unintentional filename reuse should be avoided.
- HTTP status codes and error response bodies must be inspected. Requests must not be retried blindly upon error; errors must be handled according to their type:
- 4xx (Client error): must be reviewed and corrected before retries.
- 5xx (Server error): retries must use exponential backoff.
- Repeated retries of failing or malformed requests must be avoided.
- Duplicate resources must not be created (e.g., re-posting the same job file under a different name to force reprocessing).
- User consents must only be updated following an explicit user action.
- Client-side timeouts (suggested: 30 seconds for standard API calls) must be set.
- Compromised tokens must be revoked immediately.
- Secrets (API keys, Client Secrets or tokens) must not be exposed in public code or repositories.
- TLS (HTTPS) must always be used for API calls.
- Only the data required for the specific service, in line with privacy laws, must be accessed.
- Load or penetration testing must not be performed without prior written permission.
- Data must not be reverse-engineered, scraped, or replicated at scale beyond the scope of intended usage.
- Agreed-upon product limitations of the APIs must not be circumvented (e.g., unauthorized data exports).
- Deprecated endpoints and fields must be migrated away from within 180 days of notice (Note: any endpoint may be changed, without notice, if required to address a critical security issue).
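The error-handling requirements above (no blind retries, 4xx corrected before retrying, 5xx retried with exponential backoff, bounded attempts) can be sketched as a small decision function. This is an illustrative sketch only; the function name, retry cap, and delay constants are assumptions, not part of the Eliq API.

```python
import random

MAX_RETRIES = 5        # illustrative cap on retry attempts
BASE_DELAY = 1.0       # seconds; illustrative backoff base
REQUEST_TIMEOUT = 30   # suggested client-side timeout for standard API calls

def retry_decision(status_code: int, attempt: int) -> tuple[bool, float]:
    """Decide whether an HTTP error should be retried, and after what delay.

    4xx (client error): never retried blindly; the request must be
    reviewed and corrected first.
    5xx (server error): retried with exponential backoff plus jitter,
    up to MAX_RETRIES attempts.
    """
    if 400 <= status_code < 500:
        return (False, 0.0)  # fix the request before any retry
    if 500 <= status_code < 600 and attempt < MAX_RETRIES:
        delay = BASE_DELAY * (2 ** attempt) + random.uniform(0, 0.5)
        return (True, delay)
    return (False, 0.0)      # retry budget exhausted or non-error status
```

For example, `retry_decision(404, 0)` returns `(False, 0.0)`, while `retry_decision(503, 0)` signals a retry after roughly one second, doubling on each subsequent attempt.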
5. API Usage Recommendations
- Only /health or /heartbeat endpoints should be used for availability checks, not functional endpoints like GET /user or GET /location.
- Polling should only be performed when necessary, and interval guidelines should be followed.
- Status endpoints should not be polled more than once every 5 seconds.
- More than 1000 requests per minute should not be sent to any single endpoint, unless explicitly agreed otherwise in writing.
- Insights API resources (e.g., user/location) should not be polled more than once every 60 seconds, unless recently updated.
- Datapoints already loaded in the Eliq data platform should not be re-ingested unless they were erroneous.
- Static or infrequently changing data (e.g., user/location metadata) should be cached for at least 24 hours.
- Malformed, empty, or invalid files should not be submitted to batch jobs.
- The request ID (X-Request-Id / transaction_id) returned in headers or the response body should be logged and stored whenever a response is referenced in a support request.
- Unnecessary jobs should be cancelled via /jobs/{jobId}/cancel rather than being left to consume processing capacity.
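The caching recommendation above (static or infrequently changing data held for at least 24 hours) can be sketched with a minimal time-to-live cache. This is an illustrative sketch under stated assumptions; the class name and the injectable clock are not part of any Eliq SDK.

```python
import time

STATUS_POLL_INTERVAL = 5.0      # seconds; minimum gap between status polls
METADATA_CACHE_TTL = 24 * 3600  # cache static metadata for at least 24 hours

class TTLCache:
    """Minimal time-based cache for infrequently changing resources.

    The clock is injectable (defaults to time.monotonic) so expiry
    behaviour can be tested without waiting in real time.
    """

    def __init__(self, ttl: float, clock=time.monotonic):
        self.ttl = ttl
        self._clock = clock
        self._store = {}

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, stored_at = entry
        if self._clock() - stored_at >= self.ttl:
            del self._store[key]  # entry expired; force a fresh fetch
            return None
        return value

    def put(self, key, value):
        self._store[key] = (value, self._clock())
```

A client would check the cache before calling the Insights API and only fetch (and `put`) when `get` returns `None`, keeping reads of user/location metadata well under the recommended polling limits.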
6. Enforcement and Remediation
Handling of Violations
- Engagement: Clients are expected to respond within 5 business days of Eliq’s initial contact.
- Corrective Action: Agreed measures should be implemented within 10 business days, unless an alternative timeline is mutually agreed.
- Escalation: If violations are repeated or remain unresolved, Eliq may apply:
- Operational surcharges
- Temporary access restrictions
- Termination of the service agreement (reserved for critical, high-risk scenarios and always as a last resort)