Web Crawler APIs

Web Crawler APIs are used to navigate to all pages of a website and check whether its links are working.

Crawl Website

This API is used to test the links on a website. Sahi navigates to all the links and stores the results in a CSV file. The CSV file contains the test page, the number of links found, and any error found on each link (errors: a JavaScript error on link click, or a network error).
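For illustration, the output file might look like the lines below. Only the three fields described above are documented; the exact column headers and their order here are an assumption.

testpage,linksFound,error
http://sahitest.com/demo/index.htm,12,
http://sahitest.com/demo/links.htm,8,JavaScript error on link click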

Parameters
$websiteURL         string   URL of the website to crawl.
$depth              integer  How deep Sahi should navigate from each link.
$csvOutputFilePath  string   Path of the output CSV file. A relative path resolves relative to the files folder of the current project.
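
For example (a minimal sketch restating the path rule above):

// "output.csv" is a relative path, so the file is written to
// <current project>/files/output.csv
_crawlWebsite("http://sahitest.com", 1, "output.csv");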
Return Value

Modes Supported:
Raw Script
// Verifies all the links on sahitest.com (depth 1)
_crawlWebsite("http://sahitest.com", 1, "output1.csv");

// Verifies all the links on sahitest.com, and also the links on each page linked from sahitest.com (depth 2)
_crawlWebsite("http://sahitest.com", 2, "output2.csv");

// Use the _artifact API to save a copy of the output file and access it from the Sahi Reports page.
_artifact("output1.csv");
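
After the crawl, the output file can be post-processed in the same script. The sketch below is only an illustration: it assumes the column layout described above (test page, links found, error) and uses Sahi's _readCSVFile and _log APIs; adjust the column indexes to match the actual file.

// Minimal sketch: read the crawl output and log rows that report an error.
// Assumption: column 0 = test page, column 1 = links found, column 2 = error.
var $rows = _readCSVFile("output1.csv");
for (var $i = 0; $i < $rows.length; $i++) {
    var $row = $rows[$i];
    if ($row[2] != null && $row[2] != "") {
        _log("Error on " + $row[0] + ": " + $row[2]);
    }
}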

Sahi Pro Classic API: _crawlWebsite