Comments (9)
It would be great if you also described how to hook this up to express.js — that would be really nice.
I know about prerender.io, but I'd like to do this without third-party services, so to speak.
Google can already handle JS:
"We are no longer recommending the AJAX crawling proposal we made back in 2009.
In 2009, we made a proposal to make AJAX pages crawlable. Back then, our systems were not able to render and understand pages that use JavaScript to present content to users. Because "crawlers … [were] not able to see any content … created dynamically," we proposed a set of practices that webmasters can follow in order to ensure that their AJAX-based applications are indexed by search engines.
Times have changed. Today, as long as you're not blocking Googlebot from crawling your JavaScript or CSS files, we are generally able to render and understand your web pages like modern browsers. To reflect this improvement, we recently updated our technical Webmaster Guidelines to recommend against disallowing Googlebot from crawling your site's CSS or JS files."
https://webmasters.googleblog.com/2015/10/deprecating-our-ajax-crawling-scheme.html
There was an article somewhere on Habr where someone experimented with this "handles AJAX" claim. In the end he rolled it back, because Google's AJAX rendering turned out to be rather limited and his search rankings dropped.
That's all well and good, but my site fetches 3–5 MB of JSON data. I tried "Fetch as Google" in Google Search Console to see how Googlebot renders the page, and it renders it incorrectly — whereas with snapshots everything works perfectly!
// Serve a pre-rendered snapshot when the crawler passes _escaped_fragment_
if (isset($_GET['_escaped_fragment_'])) {
    if ($_GET['_escaped_fragment_'] != '') {
        // Non-empty fragment: it carries the hash route directly
        $val = $_GET['_escaped_fragment_'];
    } else {
        // Empty fragment: fall back to the path of the current URL
        $url = "https://" . $_SERVER["HTTP_HOST"] . $_SERVER["REQUEST_URI"];
        $arrUrl = parse_url($url);
        $val = $arrUrl['path'];
    }
    include_once "snapshots" . $val . '/index.html';
}
Serving static HTML through PHP — and via include, no less — is quite a hack.
Let nginx handle it instead:
location / {
    if ( $args ~ "_escaped_fragment_=(.+)" ) {
        set $real_url $1;
        rewrite ^ /snapshots/$real_url.html? break;
    }
    try_files $uri $uri/ =404;
}
Crawling dynamic pages for the Google and Yandex search engines (snapshots, _escaped_fragment_, ajax, fragment)