Description meta tags are important because Google might use them as snippets for your pages. Note that we say "might" because Google may instead choose a relevant section of your page's visible text if it matches a user's query well. Adding a description meta tag to each of your pages is always good practice in case Google cannot find a good selection of text to use in the snippet. The Webmaster Central Blog has informative posts on improving snippets with better description meta tags and better snippets for your users. We also have a handy Help Center article on how to create good titles and snippets.
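As a concrete illustration, a description meta tag is a short HTML element placed in a page's `<head>`; the page title and description text below are made-up examples, not values from this guide:

```html
<!DOCTYPE html>
<html>
  <head>
    <title>Brandon's Baseball Cards - Buy Cards, News, Prices</title>
    <!-- A one- to two-sentence summary Google may use as the snippet -->
    <meta name="description"
          content="Brandon's Baseball Cards provides a large selection of
          vintage and modern cards, plus news and price guides.">
  </head>
  <body>...</body>
</html>
```

Writing a unique, accurate description for each page gives the search engine a usable fallback when no passage of visible text summarizes the page well.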


23snaps, Amikumu, aNobii, AsianAve, Ask.fm, Badoo, Cloob, Cyworld, Diaspora, Draugiem.lv, Ello, Facebook, Foursquare, Gab, Hello, Hi5, Highlight, Houseparty, Idka, Instagram, IGTV, IRC-Galleria, Keek, LiveJournal, Lifeknot, LockerDome, Marco Polo, Mastodon, MeetMe, Meetup, Miaopai, micro.blog, Minds, MixBit, Mixi, Myspace, My World, Nasza-klasa.pl, Nextdoor, OK.ru, Path, Peach, Periscope, Pinterest, Pixnet, Plurk, Qzone, Readgeek, Renren, Sina Weibo, Slidely, Snapchat, SNOW, Spaces, Streetlife, StudiVZ, Swarm, Tagged, Taringa!, Tea Party Community, TikTok, Tinder, Tout, Tuenti, TV Time, Tumblr, Twitter, Untappd, Vero, VK, Whisper, Xanga, Yo
Snapchat is a popular messaging and picture-exchanging application created in 2011 by three Stanford University students: Evan Spiegel, Bobby Murphy, and Reggie Brown. The application was first developed to let users message back and forth and send photographs that remain viewable for only 1–10 seconds before they disappear. The app was an instant hit with social media users, and today as many as 158 million people use Snapchat every single day.[60] It is also estimated that Snapchat users open the application approximately 18 times per day, spending about 25–30 minutes per day in the app.[60]
Webmasters and content providers began optimizing websites for search engines in the mid-1990s, as the first search engines were cataloging the early Web. Initially, webmasters needed only to submit the address of a page, or URL, to the various engines, which would send a "spider" to "crawl" that page, extract links to other pages from it, and return information found on the page to be indexed.[5] The process involves a search engine spider downloading a page and storing it on the search engine's own server. A second program, known as an indexer, extracts information about the page, such as the words it contains, where they are located, and any weight for specific words, as well as all links the page contains. All of this information is then placed into a scheduler for crawling at a later date.
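The spider-and-indexer pipeline described above can be sketched in a few lines of Python. This is a minimal illustration, not any real search engine's code: it parses a hardcoded page (rather than fetching one over the network), extracts the outbound links a spider would follow, and builds the word-count record an indexer would store; the function and class names are invented for this example.

```python
from html.parser import HTMLParser
from collections import Counter
import re

class LinkExtractor(HTMLParser):
    """Collects href targets from <a> tags, as a spider would."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def index_page(html):
    """Mimic the indexer: record word counts and outbound links."""
    parser = LinkExtractor()
    parser.feed(html)
    text = re.sub(r"<[^>]+>", " ", html)  # crude tag stripping for the word index
    words = Counter(w.lower() for w in re.findall(r"[A-Za-z]+", text))
    return {"words": words, "links": parser.links}

page = '<html><body><h1>Hello</h1><a href="/about">About us</a></body></html>'
record = index_page(page)
```

In a real engine, the extracted links would be handed to the scheduler for later crawling, and word positions and weights (e.g. words in headings) would be stored alongside the raw counts.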