This KBA provides information about SAP SuccessFactors OData API recommended usage and best practices.
SAP SuccessFactors OData API
OData API Best Practices
- Query only modified records
Instead of querying all records, query only the records that have been modified since your last execution for integration use cases, and use server pagination to ensure stable results. For more information on server pagination, see Pagination.
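As a minimal sketch, a delta query can filter on a last-modified timestamp. The base URL and entity set below are illustrative assumptions, and lastModifiedDateTime is assumed to be the modification-timestamp property of your entity; substitute the names your instance actually uses.

```python
from datetime import datetime, timezone
from urllib.parse import urlencode

# Hypothetical base URL; substitute your own instance hostname.
BASE = "https://api.example.successfactors.com/odata/v2"

def delta_query_url(entity, last_run):
    """Build a query returning only records modified since the last run."""
    ts = last_run.strftime("%Y-%m-%dT%H:%M:%SZ")
    params = {"$filter": f"lastModifiedDateTime ge datetimeoffset'{ts}'"}
    return f"{BASE}/{entity}?{urlencode(params)}"

url = delta_query_url("User", datetime(2024, 1, 1, tzinfo=timezone.utc))
```

Persist the timestamp of each successful run so the next execution can use it as `last_run`.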
- Use server pagination for integration use cases
Compared with client pagination, server pagination is faster and does not suffer data loss and duplication issues when simultaneous edits occur in the same instance. You can choose either cursor-based or snapshot-based pagination for your integration use case. See Pagination for more information.
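A cursor-based server-paginated feed can be consumed by following the __next link each page returns until it is absent. In the sketch below, fetch_json is a stand-in for your authenticated HTTP client, not a real library call:

```python
# fetch_json below is a stand-in for an authenticated HTTP GET that parses
# the JSON response body of a SuccessFactors OData query.
def fetch_all(first_url, fetch_json):
    """Collect every page of a server-paginated query by following __next."""
    records = []
    url = first_url
    while url:
        data = fetch_json(url)["d"]     # {"results": [...], "__next": "..."}
        records.extend(data["results"])
        url = data.get("__next")        # absent on the final page
    return records
```

Because the cursor lives on the server, pages stay consistent even while records are being edited concurrently.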
- Do not run jobs too often to get real-time results
Repeating queries much more often than once an hour consumes excessive API resources and may be throttled in the future. In extreme cases, you may even be denied service because of frequent queries, especially ones that require a lot of backend processing.
- Use batch or $filter to get multiple records
Instead of pulling many records one at a time using a key predicate, use a batch request or the $filter in operator.
In the OData API, a user login session is created on the server for each request. This is a resource-demanding process, especially when login audit is enabled. A high volume of requests may lead to excessive usage of system resources and drastically slow down performance. Therefore, we encourage you to use batch operations or the in operator in the $filter query option whenever possible. For example, instead of using the ToDo API to query multiple users with many requests, use TodoEntryV2, which allows you to do it in one request.
See The $filter Keyword for more information.
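As an illustrative sketch, a single request can match several keys with the in operator in $filter. The base URL, entity, and key names here are hypothetical:

```python
from urllib.parse import urlencode

# Hypothetical base/entity/key names; the 'in' operator lets one request
# match many keys instead of issuing one request per key.
def in_filter_url(base, entity, key, values):
    quoted = ",".join(f"'{v}'" for v in values)
    query = urlencode({"$filter": f"{key} in {quoted}"})
    return f"{base}/{entity}?{query}"
```

One such request creates one login session on the server, instead of one per key.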
- Reuse login sessions
Creating a login session is a resource-demanding task. Instead of creating a session for each HTTP transaction or each page of paginated data, reuse login sessions.
See Enabling Session Reuse for more information.
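One way to reuse a session from Python's standard library is to share a cookie jar across requests, so the session cookie issued by the first authenticated call is re-sent on later ones. This is a sketch of the client side only; see Enabling Session Reuse for the server-side configuration:

```python
import urllib.request
from http.cookiejar import CookieJar

# Share one cookie jar across all calls so the session cookie from the first
# authenticated response is replayed on subsequent requests, letting the
# server reuse the existing login session instead of creating a new one.
jar = CookieJar()
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))
# opener.open(first_request); opener.open(second_request)  # share one session
```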
- Do not use an API for integration that is only designed for single user
APIs designed for single users should not be used for integration purposes by iterating requests over all users. This creates one login session per user and significantly slows down performance. One example is the ToDo API, which only allows querying data of a single user. For integration purposes, use the new TodoEntryV2 API. TodoEntryV2 allows you to query items of multiple users with the OData API Todo Export permission.
See TodoEntryV2 for more information.
- Tune your batch requests into proper sizes
The OData API can return a maximum of 1,000 records in a single page. Tune your batch sizes to be as large as possible. For more complex transactions, you may need to decrease the size to avoid HTTP timeouts.
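A simple helper for sizing requests, using the 1,000-record maximum as the default and leaving room to shrink it for complex transactions:

```python
def chunk(records, size=1000):
    """Yield pages no larger than the 1,000-record API maximum; lower
    `size` for complex payloads that risk HTTP timeouts."""
    for i in range(0, len(records), size):
        yield records[i:i + size]
```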
- Avoid large $expand statements
The $expand statement can be used to request master-detail data. However, using $expand to join a large number of tables can lead to poor performance. We recommend that you first fetch the master data in batches and load details on demand when the user requests them.
For example, an attempt to query all JobRequisitions and then expand JobApplications with all attachments for each application is considered too complex and large; it is error-prone and may be throttled in the future.
The recommended master-detail implementation would be:
- Bind the master list to the simple top-level Job Requisition entity so UI5 loads it on demand via batch operations as the user scrolls.
- Only when the user clicks a job requisition do you bind the detail list to the candidates, filtering on the selected job requisition ID.
- Load the resume and cover letter only when the user clicks a job candidate.
Poor performance is often a sign of misuse of APIs. As a rule of thumb, always keep your transactions simple.
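The recommended flow above can be sketched as separate, small requests: a master query with no $expand, and a detail query issued only for the selected record. The instance URL, entity, and field names here are illustrative assumptions:

```python
from urllib.parse import urlencode

# Hypothetical instance URL and entity/field names, for illustration only.
BASE = "https://api.example.successfactors.com/odata/v2"

def master_url(top=50, skip=0):
    # Page through lightweight requisition rows; no $expand at this level.
    q = {"$select": "jobReqId,jobTitle", "$top": top, "$skip": skip}
    return f"{BASE}/JobRequisition?{urlencode(q)}"

def detail_url(job_req_id):
    # Fetch applications only for the requisition the user selected.
    q = {"$filter": f"jobReqId eq {job_req_id}"}
    return f"{BASE}/JobApplication?{urlencode(q)}"
```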
- Do not query properties and expand entities you do not need or use
It is easy to build a query that does more $select and $expand than you need. This is especially easy with the Boomi connector, whose UI is constrained by limitations in the Dell Boomi connector toolkit.
As a developer, you might get tired of tweaking your queries to match the content as requirements change, and be tempted to pull more than you need and release the queries to production without tuning.
The Integration Center helps avoid this. Instead of building a query manually based on the mapping of fields, the Integration Center generates the query from the mapping.
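In the same spirit of querying only what the mapping needs, here is a sketch that derives $select directly from the list of mapped fields (all names are hypothetical):

```python
from urllib.parse import urlencode

# Hypothetical helper: build the $select list straight from the fields your
# integration mapping actually uses, so nothing extra is queried.
def select_only(base, entity, mapped_fields):
    return f"{base}/{entity}?{urlencode({'$select': ','.join(mapped_fields)})}"
```

When the mapping changes, the query changes with it, so no unused properties linger in production.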
- Tune your client wait time to match the system wait time
Your client should be set to wait a reasonable amount of time before timing out. Complex operations can take as long as 10 minutes, and our network and servers will continue to process a transaction for that long. It is best to wait for the transaction to complete rather than wasting it.
You can also tune your transaction to avoid timeouts.
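A minimal sketch of a client-side timeout using the standard library; the 600-second default reflects the 10-minute upper bound mentioned above, not a mandated setting:

```python
import urllib.request

# Hypothetical helper: wait up to 600 seconds (complex operations can take
# as long as 10 minutes) for a response before giving up.
def fetch(url, timeout_seconds=600):
    with urllib.request.urlopen(url, timeout=timeout_seconds) as resp:
        return resp.read()
```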
- Implement your retry logic properly
Retry logic can help to recover transactions that failed due to internet connectivity or backend server issues, but retry must be done with care based on the type of HTTP error.
In all cases, you must "sleep" a reasonable amount of time, from 1 to 5 minutes, before attempting a retry. Immediately retrying may cause a denial-of-service condition and does not give the server time to recover.
Retry a maximum of 5 times before abandoning your task. For example, the Boomi connector retries a maximum of 5 times.
Error types eligible for retry
HTTP response code 5XX other than 500. A 500 error will not recover because it is caused by a problem with your query or payload.
Note that 503 Service Unavailable may occur when a server is overloaded. It may also occur during maintenance periods; you should avoid running queries during published maintenance periods.
HTTP response code 412 can occur for edit operations due to optimistic locking on back-end transactions.
For batch operations, only retry the records that failed. Do not include successful edit operations in the retry payload. This applies to specific transactions in a $batch and specific instances in a multi-record upsert payload.
For 412 errors, retry a maximum of 1 time.
HTTP timeout errors can be retried, but only if your client timeout matches the SuccessFactors infrastructure timeout of 5 minutes. If you retry a timeout before this period, you will stack transactions on top of each other, effectively mounting a denial-of-service attack.
HTTP connection reset errors, where no response is received from the server.
Errors for which retry should not be attempted
401 Authentication Failed error. You have most likely provided incorrect credentials and they won't be accepted the second time.
404 Not Found error. You will not be able to recover something that does not exist.
400 Bad Request error. This error can occur for edit operations that are missing required data or have incorrect formats.
Do not retry Create/POST or Upsert operations where the object has an auto-increment key. Both can result in duplicate data.
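The retry rules above can be sketched as follows. send() is a stand-in for your real HTTP call and returns a status code; the sleep interval and maximum of 5 retries follow the guidance in this section:

```python
import time

def call_with_retry(send, max_retries=5, sleep_seconds=60, _sleep=time.sleep):
    """send() is a stub for your real request and returns an HTTP status code.
    Retries up to 5 times for 5XX errors other than 500, retries 412 at most
    once, and never retries 400, 401, 404, or 500."""
    retries_412 = 0
    code = send()
    for attempt in range(max_retries):
        if code < 400:
            return code
        if code == 412:
            if retries_412 >= 1:
                return code
            retries_412 += 1
        elif not (500 < code <= 599):
            return code              # 400/401/404/500: not recoverable
        _sleep(sleep_seconds)        # back off 1 to 5 minutes before retrying
        code = send()
    return code
```

For batch payloads, rebuild the payload with only the failed records before passing it back through this loop.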
Note: Subscribe to the SAP SuccessFactors HCM Suite OData API: Developer Guide. It is updated whenever a new feature or update improving overall OData API usability and performance becomes available for customer or partner usage.
Performance, Pagination, Batch Size, Timeout, LastRunTime, Query, LastModified, LastModifiedDateTime, Odata API Session Reuse, Logic, Scheduling Frequency, Odata API, Developer Guide, Batch Upsert/Update, Boomi Wait Time, $expand, session reuse , KBA , LOD-SF-INT-ODATA , OData API Framework , LOD-SF-INT , SF Integrations - EC Payroll, Boomi/ HCI, API , How To