Web Crawler APIs
Web Crawler APIs navigate through all the pages of a website to check whether its links are working.
_crawlWebsite
Since:
Sahi Pro | Desktop Add-On | Mobile Add-On | SAP Add-On | AI Assist Add-On |
9.0.0 | NA | NA | NA | NA |
Available for modes: Browser
_crawlWebsite($websiteURL, $depth, $csvOutputFilePath)
Arguments
$websiteURL | string | URL of the website to crawl |
$depth | integer | How deep Sahi should navigate from each link (1 crawls only the links on $websiteURL; 2 also crawls the links found on each of those pages) |
$csvOutputFilePath | string | Path of the output CSV file. A relative path resolves relative to the files folder of the current project. |
Returns
null |
Sahi Pro Flowcharts Action: Crawl Website
Details
This API is used for testing the links on a website. Sahi navigates to all the links and stores the results in a CSV file. The CSV file contains the test page, the number of links found, and any error found on each link (errors: JavaScript error on link click, or network error).
// Verifies all the links on sahitest.com
_crawlWebsite("http://sahitest.com", 1, "output1.csv");
// Verifies all the links on sahitest.com, and also the links on each page linked from sahitest.com
_crawlWebsite("http://sahitest.com", 2, "output2.csv");
// Use the _artifact API to save a copy of the output file and access it from the Sahi Reports page.
_artifact("output1.csv");