Import Features from an ESRI REST service via iShare Workflow
Overview
To import features into iShare from an ESRI REST service we need to create a Workflow Job and, where the number of records exceeds the maximum configured by the server administrator, set up two separate Workflow Tasks. To clarify, we will use an example ESRI REST service that queries School Gates.
This guide will take you through the steps below.
Check Maximum Record Count
By checking how the Layer we want to query is configured we can see that the maximum record count is set to 1000. This means that, if the dataset contains more than 1000 records, we need to use the resultOffset and resultRecordCount parameters to page through the query results.
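A layer's maximum record count can be read from its metadata page (append `?f=json` to the layer URL). The dict below is a trimmed, illustrative example of that metadata, not the output of a real service:

```python
# Trimmed, illustrative example of the JSON a layer's metadata page
# ("<layer URL>?f=json") returns. Only the fields relevant here are shown.
layer_info = {
    "name": "School Gates",
    "maxRecordCount": 1000,
}

max_records = layer_info["maxRecordCount"]
# If the dataset holds more than max_records features, the query must be
# paged with resultOffset/resultRecordCount.
```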
Example Queries
Query Parameters
Parameter | Details |
---|---|
orderByFields | Specifies one or more field names or expressions by which to order the features or records. Use this when you pass in resultOffset and resultRecordCount, so that paged results are returned in a stable order. |
resultOffset/resultRecordCount | These options fetch query results by skipping the specified number of records and starting from the next record. You can use them to fetch records that are beyond maxRecordCount. |
To explain the use of the above parameters, we will demonstrate some example queries and then show you how to use these queries, via a Workflow Job, to import the results into iShare.
Example 1
If we create the query leaving the resultOffset and resultRecordCount parameters empty, we can see at the end of the page that the parameter exceededTransferLimit is equal to true. This indicates that there are more query results and we can continue to page through them. When exceededTransferLimit is false, it indicates that you have reached the end of the query results.
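The check described above can be sketched in Python. The response dict below only illustrates the shape of the JSON a query returns; the feature contents are made up:

```python
# Trimmed, illustrative example of the JSON an ESRI REST query returns.
response = {
    "features": [{"attributes": {"OBJECTID": 1}}],  # ...up to maxRecordCount features
    "exceededTransferLimit": True,
}

def has_more_results(response):
    """True while the server still holds records beyond this page."""
    return response.get("exceededTransferLimit", False)
```

When the last page of results is returned, the server either sets exceededTransferLimit to false or omits it, which is why the sketch defaults to False.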
Example 2
To fetch the first 1000 records we need to set resultOffset=0, resultRecordCount=1000, and orderByFields=OBJECTID+ASC. This query will produce the first 1000 records (1-1000).
To fetch the remaining records we need to set the parameters as follows:
resultOffset=1000, because the maxRecordCount for this layer is configured to be 1000.
resultRecordCount=1000, to fetch the rest of the records. Important: if you know the number of records remaining you can use that number instead of 1000. We use 1000 here because we don't know the exact number remaining.
orderByFields=OBJECTID+ASC, to order the results.
In this case, there were 72 records remaining.
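Examples 1 and 2 together suggest a simple paging loop: keep advancing resultOffset by the page size until exceededTransferLimit is no longer true. The sketch below uses a stand-in `fetch_page` function in place of the actual HTTP request, so the record count (1072) is simulated, not taken from a real service:

```python
def fetch_all(fetch_page, page_size=1000):
    """Page through an ESRI REST query by advancing resultOffset until
    exceededTransferLimit is no longer reported.

    fetch_page(offset, count) stands in for the HTTP request and must
    return the parsed JSON of one query page.
    """
    features = []
    offset = 0
    while True:
        page = fetch_page(offset, page_size)
        features.extend(page.get("features", []))
        if not page.get("exceededTransferLimit", False):
            break  # reached the end of the query results
        offset += page_size
    return features

# Simulate a layer holding 1072 records with maxRecordCount=1000,
# matching the 1000 + 72 split described above.
def fake_fetch(offset, count):
    total = 1072
    n = max(0, min(count, total - offset))
    return {
        "features": [{"id": offset + i} for i in range(n)],
        "exceededTransferLimit": offset + n < total,
    }

all_features = fetch_all(fake_fetch)  # 1000 records, then the remaining 72
```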
Import Features from an ESRI REST service into your iShare database
To import the features into iShare we need to set up a Workflow Job. As mentioned previously, where the number of records exceeds the maximum configured by the server administrator, we need to set up two separate Workflow Tasks. In this case we know that the number of records exceeds the maximum, so we need two Tasks to import the full set of features:
Create a Job → ESRI REST Service
Right click on the Job and select Create Task > Spatial Data Transformation to fetch the first 1000 records. Rules to remember:
Name → School Gates
Source Data → GeoJSON
Filename → the Query URL (ESRI REST service) using resultOffset=0, resultRecordCount=1000, and orderByFields=OBJECTID+ASC, as used in Example 2 point 1.
Action → Overwrite layer
Create the second Spatial Data Transformation Task to fetch the remaining records. Rules to remember:
Name → School Gates Append extra records
Source Data → GeoJSON
Filename → the Query URL (ESRI REST service) using resultOffset=1000, resultRecordCount=1000, and orderByFields=OBJECTID+ASC, as used in Example 2 point 2.
Action → Append to layer
In the Job you should see your two Tasks and you need to set the second Task to Make dependent on previous task completing successfully.
Save the Job.
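The two Filename query URLs used by the Tasks differ only in resultOffset. As a sketch (the base URL below is a placeholder, not the real School Gates endpoint), they could be generated rather than typed by hand:

```python
from urllib.parse import urlencode

# Placeholder endpoint -- substitute your layer's query URL.
QUERY = "https://example.com/arcgis/rest/services/Schools/MapServer/0/query"

common = {
    "where": "1=1",                    # return all features
    "outFields": "*",
    "orderByFields": "OBJECTID ASC",   # stable ordering, required for paging
    "resultRecordCount": 1000,
    "f": "geojson",                    # the Tasks expect GeoJSON source data
}

# First Task (Overwrite layer) fetches records 1-1000;
# second Task (Append to layer) fetches the remainder.
first_task_url = QUERY + "?" + urlencode({**common, "resultOffset": 0})
second_task_url = QUERY + "?" + urlencode({**common, "resultOffset": 1000})
```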
Running the Job
Once you have finished creating the Tasks for your Job you can manually run the Job by either:
clicking the Run Job button
right clicking on the Job in the Explorer and selecting Run Job...
Scheduling the Job
In order to keep your data up-to-date you would normally Schedule the Job to run at a specific time, say overnight.