Link Baiting
Link baiting is another popular way of promoting your site. If you publish a genuinely useful, unique post, other people may want to link to it. Likewise, if you republish content from another website, don't forget to place a link back to the original as a reference. Do it for others and, if your content is trustworthy, let others do it for you. This is another way to increase your link popularity.
______________________________ ______________________________ _________
Widget / Gadget Development
Develop some interactive and innovative widget/gadget applications (such as an online poll or a game widget) for your website and publish them on your blog/website or on other popular social networking sites like Facebook and MySpace. Letting your friends and others vote, play or use the widget/application will help increase your branding and website visits.
______________________________ ______________________________ _________
______________________________
Rich Snippets
Rich snippets are designed to summarize the content of a page in a way that makes it even easier for users to understand what the page is about in the search results.
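As a rough, hypothetical illustration (the product name, rating and price below are made-up placeholders), rich snippets are usually driven by structured data embedded in the page markup, for example schema.org microdata:

<!-- Hypothetical product markup; search engines may use data like this to show rating stars and a price in the result snippet -->
<div itemscope itemtype="http://schema.org/Product">
  <span itemprop="name">Acme Running Shoes</span>
  <div itemprop="aggregateRating" itemscope itemtype="http://schema.org/AggregateRating">
    Rated <span itemprop="ratingValue">4.5</span>/5 based on
    <span itemprop="reviewCount">89</span> reviews
  </div>
  <div itemprop="offers" itemscope itemtype="http://schema.org/Offer">
    <span itemprop="priceCurrency">USD</span> <span itemprop="price">59.99</span>
  </div>
</div>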
______________________________ ______________________________ _________
Negative keywords
A type of keyword that prevents your ad from being triggered by a certain word or phrase. It tells Google not to show your ad to anyone who is searching for that phrase. For example, a store that sells only new laptops might add "used" as a negative keyword so that its ads never appear for searches such as "used laptops".
______________________________ ______________________________ _________
Canonicalization
Canonicalization is the process of selecting the single, preferred URL for a page when the same content can be reached at several addresses (for example, example.com, www.example.com and www.example.com/index.html). Telling search engines which version is canonical avoids duplicate-content problems and consolidates link value onto one URL.
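As a minimal sketch (example.com and the path are placeholders), the preferred URL is commonly declared with a canonical link element in the head of every duplicate or variant page:

<!-- Tells search engines that the preferred (canonical) address of this content is the URL below -->
<link rel="canonical" href="http://www.example.com/seo-guide/">

Server-side 301 redirects from the non-preferred addresses to the canonical one achieve the same consolidation.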
______________________________ ______________________________ _________
Black hat Technique
Cloaking
Cloaking is a search engine optimization (SEO) technique in which the content presented to the search engine spider is different from that presented to the user's browser. This is done by delivering content based on the IP addresses or the User-Agent HTTP header of the user requesting the page. When a user is identified as a search engine spider, a server-side script delivers a different version of the web page, one that contains content not present on the visible page, or that is present but not searchable. The purpose of cloaking is sometimes to deceive search engines so they display the page when it would not otherwise be displayed (black hat SEO). However, it can also be a functional (though antiquated) technique for informing search engines of content they would not otherwise be able to locate because it is embedded in non-textual containers such as video or certain Adobe Flash components. As of 2006, better methods of accessibility, including progressive enhancement, are available, so cloaking is no longer considered necessary by its proponents.
Cloaking is often used as a spamdexing technique to try to trick search engines into giving the relevant site a higher ranking. By the same method, it can also be used to trick search engine users into visiting a site that is substantially different from the search engine description, including delivering pornographic content cloaked within non-pornographic search results.
Cloaking is a form of the doorway page technique.
Doorway pages
Doorway pages are web pages that are created for spamdexing. This is for spamming the index of a search engine by inserting results for particular phrases with the purpose of sending visitors to a different page. They are also known as bridge pages, portal pages, jump pages, gateway pages, entry pages and by other names. Doorway pages that redirect visitors without their knowledge use some form of cloaking. This usually falls under Black Hat SEO.
If a visitor clicks through to a typical doorway page from a search engine results page, in most cases they will be redirected with a fast meta refresh command to another page. Other forms of redirection include the use of JavaScript and server-side redirection from the server configuration file. Some doorway pages may be dynamic pages generated by scripting languages such as Perl and PHP.
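As a minimal sketch of the fast meta refresh just mentioned (the target URL is a placeholder), a doorway page's head might contain:

<!-- Sends the visitor to the real target page after 0 seconds -->
<meta http-equiv="refresh" content="0; url=http://www.example.com/real-target-page/">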
In the literature, landing pages are regularly misconstrued as being the same thing as doorway pages. Another form of doorway page uses a method called cloaking.
Keyword Stuffing
Keyword stuffing is considered an unethical search engine optimization (SEO) technique that can lead to a website being banned from major search engines, either temporarily or permanently. Keyword stuffing occurs when a web page is loaded with keywords in the meta tags or in the content of the page. The repetition of words in meta tags may explain why many search engines no longer use these tags.
Keyword stuffing had been used in the past to obtain top search engine rankings and visibility for particular phrases. This method is completely outdated and adds no value to rankings today. In particular, Google no longer gives good rankings to pages employing this technique.
Hiding text from the visitor is done in many different ways. Text colored to blend with the background, CSS "Z" positioning to place text "behind" an image — and therefore out of view of the visitor — and CSS absolute positioning to have the text positioned far from the page center are all common techniques. By 2005, many invisible text techniques were easily detected by major search engines.
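For illustration only, and with placeholder colors, positions and keywords, the hidden-text tricks described above look something like this in markup (search engines now detect and penalize both):

<!-- Text colored to blend with the background -->
<p style="color:#ffffff; background-color:#ffffff">cheap laptops cheap laptops cheap laptops</p>
<!-- Text pushed far off-screen with absolute positioning -->
<div style="position:absolute; left:-9999px">cheap laptops best prices buy now</div>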
Link Farms
On the World Wide Web, a link farm is any group of web sites that all hyperlink to every other site in the group. In graph-theoretic terms, a link farm is a clique. Although some link farms can be created by hand, most are created through automated programs and services. A link farm is a form of spamming the index of a search engine (sometimes called spamdexing). Other link exchange systems are designed to allow individual websites to selectively exchange links with other relevant websites and are not considered a form of spamdexing.
[Figure: a diagram of a link farm, where each circle represents a website and each arrow represents a pair of hyperlinks between two websites.]
Spamdexing
In computing, spamdexing (also known as search engine spam, search engine poisoning, Black-Hat SEO, search spam or web spam) is the deliberate manipulation of search engine indexes. It involves a number of methods, such as repeating unrelated phrases, to manipulate the relevance or prominence of resources indexed in a manner inconsistent with the purpose of the indexing system. It could be considered to be a part of search engine optimization, though there are many search engine optimization methods that improve the quality and appearance of the content of web sites and serve content useful to many users.
Search engines use a variety of algorithms to determine relevancy ranking. Some of these include determining whether the search term appears in the body text or URL of a web page. Many search engines check for instances of spamdexing and will remove suspect pages from their indexes. Also, people working for a search-engine organization can quickly block the results-listing from entire websites that use spamdexing, perhaps alerted by user complaints of false matches. The rise of spamdexing in the mid-1990s made the leading search engines of the time less useful. Using unethical methods to make websites rank higher in search engine results than they otherwise would is commonly referred to in the SEO (Search Engine Optimization) industry as "Black Hat SEO."
Common spamdexing techniques can be classified into two broad classes: content spam (or term spam) and link spam.
Spamdexing was a big problem in the 1990s, when search engines were fairly useless because they were compromised by spamdexing. Once Google came on the scene, that all changed: Google developed a page-ranking system that fought spamdexing quite well, discounting spam sites and rewarding genuinely relevant websites with high rankings.
URL Redirection
URL redirection, also called URL forwarding, is a World Wide Web technique for making a web page available under more than one URL address. When a web browser attempts to open a URL that has been redirected, a page with a different URL is opened. Similarly, domain redirection or domain forwarding is when all pages in a URL domain are redirected to a different domain, as when wikipedia.com and wikipedia.net are automatically redirected to wikipedia.org. URL redirection can be used for URL shortening, to prevent broken links when web pages are moved, to allow multiple domain names belonging to the same owner to refer to a single web site, to guide navigation into and out of a website, for privacy protection, and for less benign purposes such as phishing attacks.
______________________________ ______________________________ _________
Purposes of URL Redirection
There are several reasons to use URL redirection:
Similar domain names
A user might mis-type a URL—for example, "example.com" and "exmaple.com". Organizations often register these "mis-spelled" domains and re-direct them to the "correct" location: example.com. The addresses example.com and example.net could both redirect to a single domain, or web page, such as example.org. This technique is often used to "reserve" other top-level domains (TLD) with the same name, or make it easier for a true ".edu" or ".net" to redirect to a more recognizable ".com" domain.
Moving pages to a new domain
Web pages may be redirected to a new domain for three reasons:
· a site might desire, or need, to change its domain name;
· an author might move his or her individual pages to a new domain;
· two web sites might merge.
With URL redirects, incoming links to an outdated URL can be sent to the correct location. These links might be from other sites that have not realized that there is a change or from bookmarks/favorites that users have saved in their browsers.
The same applies to search engines. They often have the older/outdated domain names and links in their database and will send search users to these old URLs. By using a "moved permanently" redirect to the new URL, visitors will still end up at the correct page. Also, in the next search engine pass, the search engine should detect and use the newer URL.
Logging outgoing links
The access logs of most web servers keep detailed information about where visitors came from and how they browsed the hosted site. They do not, however, log which links visitors left by. This is because the visitor's browser has no need to communicate with the original server when the visitor clicks on an outgoing link.
This information can be captured in several ways. One way involves URL redirection. Instead of sending the visitor straight to the other site, links on the site can direct to a URL on the original website's domain that automatically redirects to the real target. This technique bears the downside of the delay caused by the additional request to the original website's server. As this added request will leave a trace in the server log, revealing exactly which link was followed, it can also be a privacy issue.
The same technique is also used by some corporate websites to implement a statement that the subsequent content is at another site, and therefore not necessarily affiliated with the corporation. In such scenarios, displaying the warning causes an additional delay.
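A minimal sketch of the pattern described above, assuming a hypothetical /out redirect script on the original site (all URLs are placeholders): instead of linking straight to the external page, the page links through its own domain so the click shows up in the server log before the redirect.

<!-- Direct link: the click never touches the original server, so it is not logged -->
<a href="http://externalsite.com/page">External article</a>
<!-- Logged link: the click first requests /out on the original domain, which records it and then redirects -->
<a href="http://www.example.com/out?url=http%3A%2F%2Fexternalsite.com%2Fpage">External article</a>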
Short aliases for long URLs
Main article: URL shortening
Web applications often include lengthy descriptive attributes in their URLs which represent data hierarchies, command structures, transaction paths and session information. This practice results in a URL that is aesthetically unpleasant and difficult to remember, and which may not fit within the size limitations of microblogging sites. URL shortening services provide a solution to this problem by redirecting a user to a longer URL from a shorter one.
Meaningful, persistent aliases for long or changing URLs
Sometimes the URL of a page changes even though the content stays the same. Therefore URL redirection can help users who have bookmarks. This is routinely done on Wikipedia whenever a page is renamed.
Post/Redirect/Get
Main article: Post/Redirect/Get
Post/Redirect/Get (PRG) is a web development design pattern that prevents some duplicate form submissions, creating a more intuitive interface for user agents (users).
Manipulating search engines
Redirect techniques are used to fool search engines. For example, one page could show popular search terms to search engines but redirect the visitors to a different target page. There are also cases where redirects have been used to "steal" the page rank of one popular page and use it for a different page, usually involving the 302 HTTP status code of "moved temporarily."
Search engine providers have noticed the problem and are working on appropriate actions.
As a result, today, such manipulations usually result in less rather than more site exposure.
Manipulating visitors
URL redirection is sometimes used as a part of phishing attacks that confuse visitors about which web site they are visiting. Because modern browsers always show the real URL in the address bar, the threat is lessened. However, redirects can also take users to sites that will attempt to attack in other ways. For example, a redirect might take a user to a site that tries to trick them into downloading fake antivirus software and, ironically, installing a trojan of some sort instead.
Removing referer information
When a link is clicked, the browser sends along in the HTTP request a field called referer which indicates the source of the link. This field is populated with the URL of the current web page, and will end up in the logs of the server serving the external link. Since sensitive pages may have sensitive URLs (for example, http://company.com/plans-for-the-next-release-of-our-product), it is not desirable for the referer URL to leave the organization. A redirection page that performs referrer hiding could be embedded in all external URLs, transforming for example http://externalsite.com/page into http://redirect.company.com/http://externalsite.com/page. This technique also eliminates other potentially sensitive information from the referer URL, such as the session ID, and can reduce the chance of phishing by indicating to the end user that they passed a clear gateway to another site.
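Using the example addresses from the paragraph above, an outbound link routed through the referrer-hiding gateway would be written like this:

<!-- The external server only sees redirect.company.com as the referer, not the sensitive internal URL the visitor came from -->
<a href="http://redirect.company.com/http://externalsite.com/page">External site</a>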
______________________________ ______________________________ ______________
What is Schema
Schema is the new way for Google, Yahoo and Bing to sort the whole internet out. It is a system that makes it easier for their search engines to identify what a site, or even a single paragraph, is all about. So now that we know what it is, the question to ask yourself is: how will Schema affect SEO?
A Different Kind of Code
One thing you need to know about Schema is that it isn't something like meta tags; it's much different from that. Schema attributes are inserted into div tags, h1 tags and span tags, and the code is integrated in such a way that the rest of the HTML is not affected. Very nice.
Search is becoming more and more complex as humanity, technology, lifestyle and everything else about our world become more complex. There are now more things and more categories than ever before. Schema is just a tool to help search engines know which page falls under which category. It is merely a helping hand for the big three search engines – Google, Yahoo and Bing.
In Simple Terms
Schema is just a small piece of code added to your HTML that indicates to search engines what a certain page or paragraph is all about. There's nothing magical about it. Google, Yahoo and Bing are using it to further enhance the artificial intelligence of their search engines.
The Schema code:
<div itemscope itemtype="http://schema.org/CreativeWork">
  <h1 itemprop="about">Why Schema Might be the Next BIG Ranking Factor</h1>
  <div itemprop="author" itemscope itemtype="http://schema.org/Person">
    Author: <span itemprop="name">Sean Si</span>
    (born <span itemprop="birthDate">September 6, 1988</span>)
  </div>
  <span itemprop="genre">SEO</span>
  <span itemprop="keywords">Schema, SEO, Ranking Factor</span>
</div>
Disadvantages
As you've already guessed, the downside of Schema code is that it is very time-consuming to install and implement on all your pages, unless you have only a handful of them. Take this website, SEO Hacker, for example. I have numerous pages on this site, and implementing the Schema code on each one would take me hours and hours.
I won't be implementing Schema on every page – just on the ones that I would like to test it on. After all, there is no real solid proof yet that Schema-embedded sites rank higher than those that are not.
Another disadvantage of Schema is that it cannot be implemented inside a WordPress post – this is because WordPress automatically strips out HTML that it doesn't recognize. You can put it inside your template editor, though. For example, if you're using the Thesis theme, you can add it using your Thesis Hook plugin.
So how will Schema affect SEO? As of today there is no real evidence that Schema has helped the rankings of websites that have embedded the Schema code, so we can't yet say how much of an effect it will have.
Tips for keeps: Implement the Schema code on one or two pages of your website where you want customers/readers to land, and see how it affects your rankings.
How to Get a Page Indexed by Google in 10 to 30 Minutes
Index Your Content Faster With the Fetch as Google Tool
Utilizing the various functions of Google Webmaster Tools is the best way. Among them is the Fetch as Google option, which gives users an opportunity to submit their URL to the index. Surprisingly, this tool is often under-utilized by bloggers, webmasters, and SEO strategists. It is a convenient way to speed things up considerably if you have new content that you'd like to be discovered and found in the SERPs.
Submitting your link to the index using the Fetch as Google tool is like pressing a magic button. Google states that they will usually crawl the URL within a day using this method; however, I've seen web pages and blog posts show up in the SERPs less than 5 minutes after using this tool.
______________________________
What is DoFollow?
Understanding DoFollow
"DoFollow" is simply an internet slang term given to web pages or sites that are not utilizing "NoFollow." NoFollow is a hyperlink value that tells search engines not to pass on any credibility or influence to an outbound link.
Originally created to help the blogging community reduce the number of spam links inserted into the comment areas of blog pages, the attribute is now typically standard in blog comments. It helps overwhelmed webmasters prevent spammers from gaining any kind of advantage by inserting an unwanted link on a popular page, and it has become an integral part of Google-specific SEO.
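For reference, the only difference in the markup is the rel attribute on the link (the URL is a placeholder):

<!-- A normal ("DoFollow") link: search engines may pass credibility to the target -->
<a href="http://www.example.com/">Example site</a>
<!-- A NoFollow link: tells search engines not to pass on any credibility or influence -->
<a href="http://www.example.com/" rel="nofollow">Example site</a>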
How DoFollow & NoFollow Have Affected Link Building
As a result of the implementation of NoFollow, the process of building links has taken a steep turn. Many sites, including wikis, social bookmarking sites, corporate and private blogs, commenting plug-ins and many other venues and applets across the internet began implementing NoFollow.
This made effective link building difficult for honest people and spammers alike. It also made DoFollow links the "Holy Grail" of SEOs everywhere, who seek them out as prized additions to their off-site optimization repertoire.
NoFollow Isn't Bad
There's nothing wrong with getting NoFollow links. In fact, you'll want to get an equal amount of them as well. While they don't pass on link juice, they do help associate your site with anchor text (the clickable keyword phrase of a link pointing to your site). They also increase the exposure of your site overall, which may eventually lead to you getting more mentions via DoFollow links!
______________________________ ______________________________ _______________
- 301 – Moved Permanently – Passes link equity and saves PageRank – Use new URL from now on.
- 302 – Found – Page temporarily located at a different URL – Continue to use old URL.
- 303 – See Other – Page found under a different URL – New URL not a substitute for originally requested resource.
- 307 – Temporary Redirect – Page temporarily located at a different URL – Redirection MAY be altered on occasion – Continue to use original URL.
______________________________ ______________________________ _______________
- Fetch—quick check
When the Fetch as Google tool is in fetch mode, Googlebot crawls any URL that corresponds to the path that you requested. If Googlebot is able to successfully crawl your requested URL, you can review the response your site sent to Googlebot. This is a relatively quick, low-level operation that you can use to check or debug suspected network connectivity or security issues with your site.
- Fetch and render—deeper view
The fetch and render mode tells Googlebot to crawl and display your page as browsers would display it to your audience. First, Googlebot gets all the resources referenced by your URL, such as image, CSS, and JavaScript files, and runs any code in order to render and capture the visual layout of your page as an image. You can use the rendered image to detect differences between how Googlebot sees your page and how your browser renders it.
______________________________ ______________________________ _______________
List of Dofollow Social Media Sites
http://www.socialmediatoday.com/content/top-10-social-media-sites-get-dofollow-links-2013
______________________________ ______________________________ _______________
______________________________
Content Spam Techniques
Hidden or invisible text
Unrelated hidden text is disguised by making it the same color as the background, using a tiny font size, or hiding it within HTML code such as "no frame" sections, alt attributes, zero-sized DIVs, and "no script" sections. People screening websites for a search-engine company might temporarily or permanently block an entire website for having invisible text on some of its pages. However, hidden text is not always spamdexing: it can also be used to enhance accessibility.
Meta-tag stuffing
This involves repeating keywords in the Meta tags, and using meta keywords that are unrelated to the site's content. This tactic has been ineffective since 2005.
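For illustration, with placeholder keywords unrelated to the page's actual content, meta-tag stuffing looks something like this:

<!-- Repeated and unrelated keywords crammed into the meta tags -->
<meta name="keywords" content="cheap flights, cheap flights, cheap flights, casino, insurance, diet pills">
<meta name="description" content="cheap flights cheap flights best cheap flights book cheap flights now">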
Doorway pages
"Gateway" or doorway pages are low-quality web pages created with very little content, but are instead stuffed with very similar keywords and phrases. They are designed to rank highly within the search results, but serve no purpose to visitors looking for information. A doorway page will generally have "click here to enter" on the page. In 2006, Google ousted BMW for using "doorway pages" to the company's German site, BMW.de.
Scraper sites
Scraper sites are created using various programs designed to "scrape" search-engine results pages or other sources of content and create "content" for a website. The specific presentation of content on these sites is unique, but is merely an amalgamation of content taken from other sources, often without permission. Such websites are generally full of advertising (such as pay-per-click ads), or they redirect the user to other sites. It is even feasible for scraper sites to outrank original websites for their own information and organization names.
Article spinning
Article spinning involves rewriting existing articles, as opposed to merely scraping content from other sites, to avoid penalties imposed by search engines for duplicate content. This process is undertaken by hired writers or automated using a thesaurus database or a neural network.
Machine translation
Similarly to article spinning, some sites use machine translation to render their content in several languages, with no human editing, resulting in unintelligible texts.