A powerful and customizable sitemap generator written in PHP (PHP 8+). It crawls one or multiple domains, respects `robots.txt`, follows meta directives, supports resumable sessions, sends logs by email, and can even notify search engines when the sitemap is ready. It is optimized for large websites and offers advanced crawl controls, meta/robots filtering, JSON/HTML export, and more.
- Multi-domain support (comma-separated URLs)
- Combined sitemap for all domains
- Automatically creates multiple sitemap files if more than 50,000 URLs are found
- Crawling depth control
- `robots.txt` and `<meta name="robots">` handling
- Resumable crawl via cache (optional)
- `--resetcache` to force a full crawl (new!)
- `--resetlog` to delete old log files (new!)
- Dynamic priority & changefreq rules (via config or patterns)
- Pretty or single-line XML output
- GZIP compression (optional)
- Log by email
- Health check report
- Ping Google/Bing/Yandex
- Debug mode with detailed logs
- Structured logging with timestamps and levels (`info`, `error`, `debug`, etc.)
- Log export: JSON format + HTML report (`logs/crawl_log.json`, `logs/crawl_log.html`)
- Visual crawl map generation (`crawl_graph.json`, `crawl_map.html`)
- Flattened email reports with attached crawl logs
- Customizable sender email via `--from`
- Public base URL for sitemap/map references via `--publicbase`
Requirements:

- PHP 8.0 or newer
- `curl` and `dom` extensions enabled
- Write permissions to the script folder (for logs/cache/sitemaps)
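To verify these requirements quickly, a short PHP check can be run on the server. This is a minimal sketch, not a helper shipped with sitemap.php:

```php
<?php
// check_requirements.php - hypothetical helper, not part of sitemap.php.
// Verifies PHP version, required extensions, and write access to this folder.

$ok = version_compare(PHP_VERSION, '8.0.0', '>=');
printf("PHP %s: %s\n", PHP_VERSION, $ok ? 'OK' : 'too old (8.0+ required)');

foreach (['curl', 'dom'] as $ext) {
    printf("ext-%s: %s\n", $ext, extension_loaded($ext) ? 'OK' : 'MISSING');
}

printf("write access to %s: %s\n", __DIR__, is_writable(__DIR__) ? 'OK' : 'DENIED');
```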
Command-line usage:

```bash
php sitemap.php \
  --url=https://yourdomain.com,https://blog.yourdomain.com \
  --key=YOUR_SECRET_KEY \
  [options]
```

Or via a web request:

```
sitemap.php?url=https://yourdomain.com&key=YOUR_SECRET_KEY&gzip&prettyxml
```
| Option | Description |
|---|---|
| `--url=` | Comma-separated domain list to crawl (required) |
| `--key=` | Secret key to authorize script execution (required) |
| `--output=` | Output path for the sitemap file |
| `--depth=` | Max crawl depth (default: 3) |
| `--gzip` | Export sitemap as `.gz` |
| `--prettyxml` | Human-readable XML output |
| `--resume` | Resume from last crawl using `cache/visited.json` |
| `--resetcache` | Force a fresh crawl by deleting the cache (new) |
| `--resetlog` | Clear previous crawl logs before starting (new) |
| `--filters` | Enable external filtering from `filter_config.json` |
| `--graph` | Export visual crawl map (JSON + interactive HTML) |
| `--priorityrules` | Enable dynamic `<priority>` based on URL patterns |
| `--changefreqrules` | Enable dynamic `<changefreq>` based on URL patterns |
| `--ignoremeta` | Ignore `<meta name="robots">` directives |
| `--respectrobots` | Obey rules in `robots.txt` |
| `--email=` | Send crawl log to this email |
| `--ping` | Notify search engines after sitemap generation |
| `--threads=` | Number of concurrent crawl threads (default: 10) |
| `--agent=` | Set a custom User-Agent |
| `--splitbysite` | Generate one sitemap per domain and build `sitemap_index.xml` to link them |
| `--graphmap` | Generate the crawl map as JSON and interactive HTML |
| `--publicbase=` | Public base URL for HTML links (e.g., https://example.com/sitemaps) |
| `--from=` | Sender address for email reports |
| `--debug` | Output detailed log info for debugging |
Generated output files:

- `sitemap.xml` or `sitemap-*.xml`
- `sitemap.xml.gz` (optional)
- `sitemap_index.xml` (if split)
- `cache/visited.json` – stores crawl progress (used with `--resume`)
- `logs/crawl_log.txt` – full crawl log
- `logs/crawl_log.json` – structured log as JSON
- `logs/crawl_log.html` – visual HTML report of the crawl log
- `logs/crawl_report_*.txt` – emailed attachment
- `logs/health_report.txt` – summary of the crawl (errors, speed, blocks)
- `crawl_graph.json` – graph structure for visualization
- `crawl_map.html` – interactive crawl map
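The structured JSON log lends itself to post-processing. A minimal sketch, assuming `logs/crawl_log.json` holds an array of entries with a `level` field; adjust the field names to the schema your version actually writes:

```php
<?php
// Hypothetical post-processing sketch - the field names are assumptions.
$raw = file_get_contents('logs/crawl_log.json');
$entries = json_decode($raw, true);

if (!is_array($entries)) {
    exit("Unexpected log format\n");
}

// Count entries per log level (info, error, debug, ...).
$byLevel = [];
foreach ($entries as $entry) {
    $level = $entry['level'] ?? 'unknown';
    $byLevel[$level] = ($byLevel[$level] ?? 0) + 1;
}
print_r($byLevel);
```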
Create a `config/filter.json` to define your own include/exclude patterns and dynamic rules:
```json
{
  "excludeExtensions": ["jpg", "png", "zip", "docx"],
  "excludePatterns": ["*/private/*", "debug"],
  "includeOnlyPatterns": ["blog", "news"],
  "priorityPatterns": {
    "high": ["blog", "news"],
    "low": ["impressum", "privacy"]
  },
  "changefreqPatterns": {
    "daily": ["blog", "news"],
    "monthly": ["impressum", "agb"]
  }
}
```
Activate with:

```
--filters --priorityrules --changefreqrules
```
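How the pattern lists map to sitemap values is easiest to see in code. A minimal sketch, assuming patterns are matched as plain substrings of the URL; the actual matcher in sitemap.php may use globs or regular expressions and different priority values:

```php
<?php
// Illustration only - the real matcher in sitemap.php may differ.
function priorityFor(string $url, array $priorityPatterns): float
{
    // Assumed mapping from rule bucket to <priority> value.
    $values = ['high' => 0.9, 'low' => 0.3];

    foreach ($priorityPatterns as $bucket => $patterns) {
        foreach ($patterns as $pattern) {
            if (str_contains($url, $pattern)) {
                return $values[$bucket] ?? 0.5;
            }
        }
    }
    return 0.5; // default priority
}

echo priorityFor(
    'https://yourdomain.com/blog/post-1',
    ['high' => ['blog', 'news'], 'low' => ['impressum', 'privacy']]
); // 0.9
```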
With `--ping` enabled, the script will notify:

- Yandex: `https://webmaster.yandex.com/ping`

As of 2023/2024:

- The Google and Bing ping endpoints are deprecated (they return 410 Gone)
- Use `robots.txt` with a `Sitemap:` entry instead
- Optionally submit the sitemap in the search engines' Webmaster Tools
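For reference, the Yandex ping is just an HTTP GET with the sitemap URL passed as a query parameter. A minimal sketch of making the same request manually with the required `curl` extension (roughly what `--ping` does for Yandex; the script's internals may differ):

```php
<?php
// Manual ping sketch.
$sitemap  = 'https://yourdomain.com/sitemap.xml';
$endpoint = 'https://webmaster.yandex.com/ping?sitemap=' . urlencode($sitemap);

$ch = curl_init($endpoint);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_exec($ch);
$status = curl_getinfo($ch, CURLINFO_RESPONSE_CODE);
curl_close($ch);

echo "Yandex responded with HTTP $status\n";
```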
The script requires a secret key (`--key=` on the command line or `key=` in the query string) to run. Set it inside the script:

```php
$authorized_hash = 'YOUR_SECRET_KEY';
```
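Any sufficiently random string works as the key. One way to generate a 32-character hex secret (a suggestion, not something the script enforces):

```php
<?php
// Generate a random secret suitable for $authorized_hash.
echo bin2hex(random_bytes(16)), PHP_EOL;
```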
Send crawl reports to your inbox with `--email=you@yourdomain.com`. Your server must support the `mail()` function.
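If you are unsure whether `mail()` is usable on your host, a quick self-test such as the following can help; the address is a placeholder, and actual delivery still depends on your MTA configuration:

```php
<?php
// Quick mail() availability test (hypothetical helper).
if (!function_exists('mail')) {
    exit("mail() is not available on this server\n");
}
$sent = mail('you@yourdomain.com', 'Sitemap mail test', 'If you received this, mail() works.');
echo $sent ? "Accepted for delivery\n" : "mail() returned false - check your MTA/php.ini\n";
```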
Enable `--debug` to log everything:
- Pattern matches
- Skipped URLs
- Meta robots blocking
- Robots.txt interpretation
- Response times
- Log file resets
If more than 50,000 URLs are crawled (the limit of a single sitemap file per the sitemaps.org spec), the script automatically creates multiple sitemap files:

- `sitemap-1.xml`, `sitemap-2.xml`, ...
- Or `domain-a-1.xml`, `domain-a-2.xml`, ... if `--splitbysite` is active
- These are automatically referenced from a `sitemap_index.xml`

No configuration is needed: the split is automatic.
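Conceptually, the split is just chunking the crawled URL list at the 50,000-entry limit. A minimal sketch of that idea (not the script's actual writer):

```php
<?php
// Conceptual illustration of the automatic split at 50,000 URLs per file.
// $urls stands in for the full list of crawled URLs.
$urls = array_map(fn ($i) => "https://yourdomain.com/page-$i", range(1, 120000));

foreach (array_chunk($urls, 50000) as $i => $chunk) {
    // Each chunk would be written as its own <urlset> file and then
    // referenced from sitemap_index.xml.
    printf("sitemap-%d.xml would contain %d URLs\n", $i + 1, count($chunk));
}
```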
When using `--splitbysite`, the crawler will:

- Create a separate sitemap file for each domain (e.g., `/sitemaps/domain1.xml`, `/sitemaps/domain2.xml`)
- Automatically generate a `sitemap_index.xml` file in the root directory
- Ping search engines (Google, Bing, Yandex) with the `sitemap_index.xml` URL instead of the individual sitemap files

This is useful when crawling multiple domains in a single run.
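For reference, a sitemap index is a small XML file listing the individual sitemaps. A minimal sketch of building one with the required `dom` extension; the file names and URLs are examples, not the script's exact output:

```php
<?php
// Build a sitemap_index.xml referencing per-domain sitemaps (illustration).
$ns = 'http://www.sitemaps.org/schemas/sitemap/0.9';
$sitemaps = [
    'https://yourdomain.com/sitemaps/domain1.xml',
    'https://yourdomain.com/sitemaps/domain2.xml',
];

$doc = new DOMDocument('1.0', 'UTF-8');
$doc->formatOutput = true;

$index = $doc->appendChild($doc->createElementNS($ns, 'sitemapindex'));

foreach ($sitemaps as $url) {
    $entry = $index->appendChild($doc->createElementNS($ns, 'sitemap'));
    $entry->appendChild($doc->createElementNS($ns, 'loc', $url));
    $entry->appendChild($doc->createElementNS($ns, 'lastmod', date('c')));
}

$doc->save('sitemap_index.xml');
```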
If you enable `--graph`, the crawler will export:

- `graph.json` – link structure as raw data
- `crawl_map.html` – interactive map powered by D3.js
You can explore your site structure visually, zoom in/out, drag nodes, and inspect links. Useful for spotting crawl traps, dead ends, and structure gaps.
Tip: for large sites, open the HTML file in Chrome or Firefox.
A recommended `robots.txt` that points search engines at your sitemap:

```
User-agent: *
Disallow:
Sitemap: https://yourdomain.com/sitemap.xml
```
MIT License
Feel free to fork, modify, or contribute!
Built by Gilles Dumont (Qiubits SARL)
Contributions and feedback welcome.