When using a browser to make requests, some URLs tend to take a long time to load because some esoteric part of the DOM is still loading. For that reason, Shifter's Scraping API returns all the HTML that could be gathered up until the timeout was triggered.
The example on this page demonstrates how to force a timeout by setting the timeout parameter to a very low value, timeout=200 (0.2 seconds). The parameter is expressed in milliseconds, so timeout=3000 corresponds to 3 seconds; the maximum value that can be set for this parameter is 60000.
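For instance, a request that gives the renderer up to 3 seconds would use the same endpoint and parameters as the example below, only with timeout=3000 (api_key stands in for your own key):

GET https://scrape.shifter.io/v1?api_key=api_key&url=https://httpbin.org/get&render_js=1&timeout=3000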
Forcing Timeouts Examples
You can specify the maximum time the rendering engine is allowed to spend on a page by passing the timeout parameter.
GET https://scrape.shifter.io/v1?api_key=api_key&url=https://httpbin.org/get&render_js=1&timeout=200
Input
curl --request GET --url "https://scrape.shifter.io/v1?api_key=api_key&url=https%3A%2F%2Fhttpbin.org%2Fget&render_js=1&timeout=200"
// Requires the RestSharp NuGet package (this snippet uses the pre-v107 RestSharp API).
using RestSharp;

var client = new RestClient("https://scrape.shifter.io/v1?api_key=api_key&url=https%3A%2F%2Fhttpbin.org%2Fget&render_js=1&timeout=200");
var request = new RestRequest(Method.GET);
IRestResponse response = client.Execute(request);
Response

{
  "status": "Failure",
  "status_code": 422,
  "created_at": "2022-04-26T11:57:23.242Z",
  "processed_at": "2022-04-26T11:57:23.739Z",
  "time_taken": {
    "total": 0.701,
    "scraping": 0.202,
    "setup_worker": 0.403
  },
  "error": "The target page took more than 0.2 seconds to load, the website might be down. Retry the request or increase the value of 'timeout' parameter.",
  "page_content": null
}
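Since the error message suggests retrying the request or increasing the timeout value, a client can automate that. Below is a minimal RestSharp sketch that retries the same request with progressively larger timeout values; the specific values (200, 3000, 10000) are arbitrary, and the sketch assumes the HTTP status code of the response mirrors the status_code field shown in the JSON body above.

using System;
using RestSharp;

// Hypothetical back-off values in milliseconds; 60000 is the documented maximum.
int[] timeouts = { 200, 3000, 10000 };

foreach (int timeout in timeouts)
{
    var client = new RestClient(
        "https://scrape.shifter.io/v1?api_key=api_key" +
        "&url=https%3A%2F%2Fhttpbin.org%2Fget&render_js=1&timeout=" + timeout);
    var request = new RestRequest(Method.GET);
    IRestResponse response = client.Execute(request);

    // Assumption: a rendering timeout is reported with HTTP 422, matching the
    // status_code field in the JSON body above; any other status is treated as
    // the final answer.
    if ((int)response.StatusCode != 422)
    {
        Console.WriteLine(response.Content);
        break;
    }

    Console.WriteLine("Timed out after " + timeout + " ms, retrying with a larger value...");
}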