Discuz! Board

If you want the Google spider to wait 5 seconds after each request

Posted at 17:46:30
Robots.txt directives that are no longer supported. Below are some directives that Google no longer supports; some of them, partly for technical reasons, were never supported at all.

Crawl-delay directive. Previously you could use this directive to specify the fetch interval in seconds. For example, if you wanted Googlebot to wait 5 seconds between requests, you would set the crawl-delay directive to 5:

user-agent: googlebot
crawl-delay: 5

Google no longer supports this directive, but Bing and Yandex still do. That said, you need to be careful when setting it, especially if you have a large website: with a crawl-delay of 5, a spider can crawl at most 17,280 URLs per day (86,400 seconds in a day ÷ 5). If you have millions of pages, that crawl volume is very small. On the other hand, if you have a small website, it can help you save bandwidth.

Noindex directive. This directive has never been supported by Google, but until recently it was thought that Google had some "code for handling unsupported and unpublished rules (such as noindex)". So if you wished to prevent Google from indexing all your blog pages, you might have used:

user-agent: googlebot
noindex: /blog/

On September 1, 2019, however, Google made it clear that this directive is not supported. If you want to exclude a page from search engines:
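For a single page, a minimal sketch of the meta-tag approach looks like this (the tag and value shown are the standard ones; the page it goes in is up to you):

```html
<!-- Placed in the <head> of any page that should stay out of the index -->
<meta name="robots" content="noindex">
```

For non-HTML resources such as PDFs, where there is no <head> to edit, the equivalent is the HTTP response header `X-Robots-Tag: noindex`.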



Use the meta robots tag or the X-Robots-Tag HTTP header instead.

Nofollow directive. This directive has never been officially supported by Google either. It used to be used to stop search engines from following a particular link or path. For example, to block Google from following all blog links, you would have set:

user-agent: googlebot
nofollow: /blog/

Google stated on September 1, 2019 that this directive is not supported. If you want to prevent search engines from following all links on a page, use the meta robots tag or the X-Robots-Tag HTTP header. If you want to specify that an individual link should not be followed by Google, add rel="nofollow" to that link.
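The supported alternatives look like this in practice (a minimal sketch; the URL is a placeholder):

```html
<!-- Page-wide: tell crawlers not to follow any link on this page -->
<meta name="robots" content="nofollow">

<!-- Per-link: mark just this one link as not to be followed -->
<a href="https://example.com/some-post" rel="nofollow">some post</a>
```

The page-wide version can also be sent as an HTTP response header, `X-Robots-Tag: nofollow`, which is useful when you cannot edit the page markup.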
