The FirstQuadrant API implements rate limiting to ensure fair usage and maintain service reliability for all users. This guide explains our rate limits and how to handle them gracefully.
Current rate limits
Authentication endpoints
Authentication endpoints have stricter rate limits for security:
Endpoint          Limit        Window
/v5/auth          30 requests  1 minute
/v5/auth/refresh  30 requests  1 minute
/v5/auth/login    30 requests  1 minute
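If you enforce these limits client-side, it helps to encode them as constants. The names below are illustrative, but the values come straight from the table and match the RateLimitMonitor usage later in this guide:
// Auth endpoint limits from the table above (illustrative constant names)
const AUTH_RATE_LIMIT = 30; // requests
const AUTH_RATE_WINDOW_MS = 60_000; // 1 minute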
Standard API endpoints
Standard API endpoints do not currently have publicly documented rate limits. We still recommend implementing rate limit handling in your code so your integration remains compatible with future updates.
Rate limit response
When you exceed the rate limit, you’ll receive a 429 Too Many Requests response:
{
  "code": "rate_limited",
  "status": 429,
  "message": "Too many requests",
  "description": "You have exceeded the rate limit. Please wait before making more requests."
}
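A minimal sketch for detecting this response and surfacing its body, assuming the JSON shape shown above:
// A minimal sketch: detect a 429 and log the error body shown above
async function isRateLimited(response) {
  if (response.status !== 429) return false;
  const body = await response.json();
  // body.code is "rate_limited" per the response shape above
  console.warn(`${body.message}: ${body.description}`);
  return true;
}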
Handling rate limits
Exponential backoff
The recommended approach is to implement exponential backoff with jitter:
async function makeRequestWithRetry(url, options, maxRetries = 3) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      const response = await fetch(url, options);

      if (response.status === 429) {
        if (attempt === maxRetries) {
          throw new Error("Rate limit exceeded after maximum retries");
        }

        // Exponential backoff with jitter
        const baseDelay = Math.pow(2, attempt) * 1000; // 1s, 2s, 4s
        const jitter = Math.random() * 1000; // 0-1s random jitter
        const delay = baseDelay + jitter;

        console.log(`Rate limited. Retrying in ${delay}ms...`);
        await new Promise((resolve) => setTimeout(resolve, delay));
        continue;
      }

      return response;
    } catch (error) {
      if (attempt === maxRetries) throw error;
    }
  }
}

// Usage
const response = await makeRequestWithRetry("https://api.us.firstquadrant.ai/v5/contacts", {
  headers: {
    Authorization: "Bearer YOUR_API_KEY",
    "FirstQuadrant-Organization-ID": "org_YOUR_ORG_ID",
  },
});
Best practices
1. Implement retry logic
Always implement retry logic with exponential backoff:
// Good: Exponential backoff with jitter
const delay = Math.pow(2, attempt) * 1000 + Math.random() * 1000;

// Bad: Fixed delay
const delay = 1000;

// Bad: No retry logic
if (response.status === 429) throw new Error("Rate limited");
2. Queue requests
For high-volume applications, implement a request queue:
class RequestQueue {
  constructor(maxConcurrent = 5, minDelay = 100) {
    this.queue = [];
    this.active = 0;
    this.maxConcurrent = maxConcurrent;
    this.minDelay = minDelay;
    this.lastRequestTime = 0;
  }

  async add(requestFn) {
    return new Promise((resolve, reject) => {
      this.queue.push({ requestFn, resolve, reject });
      this.process();
    });
  }

  async process() {
    if (this.active >= this.maxConcurrent || this.queue.length === 0) {
      return;
    }

    // Ensure minimum delay between requests
    const now = Date.now();
    const timeSinceLastRequest = now - this.lastRequestTime;
    if (timeSinceLastRequest < this.minDelay) {
      setTimeout(() => this.process(), this.minDelay - timeSinceLastRequest);
      return;
    }

    const { requestFn, resolve, reject } = this.queue.shift();
    this.active++;
    this.lastRequestTime = Date.now();

    try {
      const result = await requestFn();
      resolve(result);
    } catch (error) {
      reject(error);
    } finally {
      this.active--;
      this.process();
    }
  }
}

// Usage
const queue = new RequestQueue(5, 200); // Max 5 concurrent, 200ms between requests

async function fetchAllContacts() {
  const pagePromises = [];
  for (let page = 1; page <= 10; page++) {
    pagePromises.push(queue.add(() => fetch(`/v5/contacts?page=${page}`, { headers })));
  }
  return Promise.all(pagePromises);
}
3. Monitor rate limit usage
Track your API usage to avoid hitting limits:
class RateLimitMonitor {
  constructor(limit, window) {
    this.limit = limit;
    this.window = window;
    this.requests = [];
  }

  canMakeRequest() {
    const now = Date.now();
    const windowStart = now - this.window;
    // Remove old requests outside the window
    this.requests = this.requests.filter((time) => time > windowStart);
    return this.requests.length < this.limit;
  }

  recordRequest() {
    this.requests.push(Date.now());
  }

  async waitForSlot() {
    // Loop so we re-check after waking, in case a concurrent caller
    // has already taken the freed slot
    while (!this.canMakeRequest()) {
      const oldestRequest = this.requests[0];
      const waitTime = this.window - (Date.now() - oldestRequest) + 100;
      console.log(`Rate limit approaching. Waiting ${waitTime}ms...`);
      await new Promise((resolve) => setTimeout(resolve, waitTime));
    }
  }
}

// Usage for auth endpoints (30 req/min)
const authLimiter = new RateLimitMonitor(30, 60000);

async function authenticate() {
  await authLimiter.waitForSlot();
  const response = await fetch("/v5/auth", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ token: refreshToken }),
  });
  authLimiter.recordRequest();
  return response;
}
4. Batch operations
Reduce API calls by batching operations where possible:
// Instead of individual requests
for (const contact of contacts) {
  await updateContact(contact.id, contact.data); // ❌ Many requests
}

// Batch updates in groups
const batchSize = 50;
for (let i = 0; i < contacts.length; i += batchSize) {
  const batch = contacts.slice(i, i + batchSize);
  // Process batch together
  await processBatch(batch); // ✅ Fewer requests
}
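The processBatch call above is left abstract. This guide doesn't document a bulk-update endpoint, so one reasonable sketch sends a batch's requests concurrently, which reduces wall-clock time rather than request count; updateContact is the same hypothetical helper used above.
// A sketch of processBatch: run one batch concurrently, collecting failures
// without aborting the rest. updateContact is hypothetical, as above.
async function processBatch(batch) {
  const results = await Promise.allSettled(
    batch.map((contact) => updateContact(contact.id, contact.data)),
  );
  const failed = results.filter((r) => r.status === "rejected");
  if (failed.length > 0) {
    console.warn(`${failed.length} of ${batch.length} updates failed`);
  }
  return results;
}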
5. Cache responses
Implement caching to reduce redundant API calls:
class APICache {
  constructor(ttl = 300000) {
    // 5 minutes default
    this.cache = new Map();
    this.ttl = ttl;
  }

  get(key) {
    const item = this.cache.get(key);
    if (!item) return null;
    if (Date.now() > item.expiry) {
      this.cache.delete(key);
      return null;
    }
    return item.value;
  }

  set(key, value) {
    this.cache.set(key, {
      value,
      expiry: Date.now() + this.ttl,
    });
  }

  async fetch(key, fetchFn) {
    const cached = this.get(key);
    if (cached !== null) return cached; // explicit null check so falsy cached values aren't refetched
    const value = await fetchFn();
    this.set(key, value);
    return value;
  }
}

// Usage
const cache = new APICache();

async function getContact(id) {
  return cache.fetch(`contact:${id}`, async () => {
    const response = await fetch(`/v5/contacts/${id}`, { headers });
    return response.json();
  });
}
Error recovery strategies
Circuit breaker pattern
Implement a circuit breaker to prevent cascading failures:
class CircuitBreaker {
  constructor(threshold = 5, timeout = 60000) {
    this.failureCount = 0;
    this.threshold = threshold;
    this.timeout = timeout;
    this.state = "CLOSED"; // CLOSED, OPEN, HALF_OPEN
    this.nextAttempt = Date.now();
  }

  async execute(requestFn) {
    if (this.state === "OPEN") {
      if (Date.now() < this.nextAttempt) {
        throw new Error("Circuit breaker is OPEN");
      }
      this.state = "HALF_OPEN";
    }

    try {
      const result = await requestFn();
      this.onSuccess();
      return result;
    } catch (error) {
      this.onFailure();
      throw error;
    }
  }

  onSuccess() {
    this.failureCount = 0;
    this.state = "CLOSED";
  }

  onFailure() {
    this.failureCount++;
    if (this.failureCount >= this.threshold) {
      this.state = "OPEN";
      this.nextAttempt = Date.now() + this.timeout;
      console.log(`Circuit breaker opened. Retry after ${new Date(this.nextAttempt)}`);
    }
  }
}

// Usage
const breaker = new CircuitBreaker();

async function makeAPICall() {
  return breaker.execute(async () => {
    const response = await fetch("/v5/contacts", { headers });
    if (response.status === 429) {
      throw new Error("Rate limited");
    }
    return response.json();
  });
}
Testing rate limits
When developing, test your rate limit handling:
// Simulate rate limit scenarios
async function testRateLimitHandling() {
  const requests = [];

  // Make rapid requests to trigger rate limit
  for (let i = 0; i < 40; i++) {
    requests.push(
      makeRequestWithRetry("/v5/auth", {
        method: "POST",
        headers: {
          "Content-Type": "application/json",
        },
        body: JSON.stringify({ test: true }),
      }),
    );
  }

  try {
    const results = await Promise.allSettled(requests);
    const successful = results.filter((r) => r.status === "fulfilled").length;
    const failed = results.filter((r) => r.status === "rejected").length;
    console.log(`Successful: ${successful}, Failed: ${failed}`);
  } catch (error) {
    console.error("Test failed:", error);
  }
}
Future considerations
While most endpoints currently don’t have publicly documented rate limits, this may change. Design your integration to:
- Handle 429 responses gracefully, even on endpoints without current limits
- Monitor response headers for future rate limit information (see the sketch after this list)
- Implement configurable delays between requests
- Use pagination to reduce the number of requests
- Cache data where appropriate
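As a defensive pattern, you can log rate limit headers if they ever appear. The header names here (X-RateLimit-Remaining, Retry-After) are common conventions, not documented FirstQuadrant headers; treat them as assumptions:
// Hedged sketch: these header names are common conventions, not
// documented FirstQuadrant headers; log them only if present.
function logRateLimitHeaders(response) {
  const remaining = response.headers.get("X-RateLimit-Remaining");
  const retryAfter = response.headers.get("Retry-After");
  if (remaining !== null) console.log(`Rate limit remaining: ${remaining}`);
  if (retryAfter !== null) console.log(`Retry-After: ${retryAfter}s`);
}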
By following these practices, your integration will continue to work smoothly as the API evolves.