Usage
To create a client and login using an email/password combo, use the
ArkindexClient.login helper method:
from arkindex import ArkindexClient
cli = ArkindexClient()
cli.login('EMAIL', 'PASSWORD')
This helper method will save the authentication token in your API client, so that it is reused in later API requests.
If you already have an API token, you can create your client like so:
from arkindex import ArkindexClient
cli = ArkindexClient('YOUR_TOKEN')
Making requests
To perform a simple API request, you can use the request() method. The
method takes an operation ID as a name and the operation’s parameters
as keyword arguments.
You can open https://your.arkindex/api-docs/ to access the API
documentation, which describes the available API endpoints,
including their operation IDs and parameters.
corpus = cli.request('RetrieveCorpus', id='...')
The result will be a Python dict containing the result of the API
request. If the request returns an error, an
arkindex.exceptions.ErrorResponse will be raised.
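To handle API errors gracefully, you can catch this exception. Here is a minimal sketch; the status_code and content attributes used below are assumptions about the exception's interface:

```python
from arkindex import ArkindexClient
from arkindex.exceptions import ErrorResponse

cli = ArkindexClient('YOUR_TOKEN')
try:
    corpus = cli.request('RetrieveCorpus', id='...')
except ErrorResponse as e:
    # status_code and content are assumed attributes of the exception
    print('Request failed with HTTP', e.status_code)
    print('Error details:', e.content)
```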
Dealing with pagination
The Arkindex client adds another helper method for paginated endpoints
that deals with pagination for you: ArkindexClient.paginate. This
method returns a ResponsePaginator instance, which is a classic Python
iterator that does not perform any actual requests until absolutely
needed: that is, until the next page must be loaded.
for element in cli.paginate('ListElements', corpus=corpus['id']):
    print(element['name'])
Calling list() on a ResponsePaginator may load dozens of pages at once
and put a heavy load on the server. You can use len() instead to get
the total item count before fetching every page.
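For instance, a minimal sketch checking the dataset size before iterating (the corpus id below is a placeholder):

```python
from arkindex import ArkindexClient

cli = ArkindexClient('YOUR_TOKEN')
elements = cli.paginate('ListElements', corpus='CORPUS_ID')
# len() is assumed to retrieve the total item count
# without iterating over every page
print(len(elements))
```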
A call to paginate may produce hundreds of sub-requests depending on
the size of the dataset you're requesting. To accommodate large
datasets and recover from network or performance issues, paginate
supports a retries parameter that specifies the number of attempts
made for each page in the dataset. By default, the method will retry 5
times per page.
For very large datasets, you may want to allow paginate to skip pages
that repeatedly fail (errors happen). In this case, set the optional
boolean parameter allow_missing_data to True (it defaults to False).
Here is an example of pagination on a large dataset, allowing data loss, lowering retries and listing the missed pages:
elements = cli.paginate(
    'ListProcessElements',
    id='XXX',
    retries=3,
    allow_missing_data=True,
)
for element in elements:
    print(element['id'])
print(f"Missing pages: {elements.missing}")
Using another server
By default, the API client is set to point to the main Arkindex server
at https://arkindex.teklia.com. To use this client with another
server, pass the base_url keyword argument when setting up your API
client:
cli = ArkindexClient(base_url='https://somewhere')