Wednesday 30 November 2011

What Is “(Not Provided)” in Organic Search Traffic Keywords on Google Analytics?

For about two months we’ve noticed a “(not provided)” item in Google Analytics traffic reports under the “Traffic Sources / Google Organic” section. We came across an article that does a good job of explaining it. The basic answer is that the keywords searchers used to find this particular site are not shown because those searchers were logged into their Google accounts when conducting the search.


Google wants to further protect its users’ privacy by encrypting search for signed-in users (through https://www.google.com), which prevents the search terms from being passed along to the destination site.
In November our site received 1,144 visitors from organic Google search whose search keywords were not shown (see below). That is 19.6% of our total Google organic search traffic.
The numbers are similar across the board for our other clients. The fact that we are running blind for about 20% of our total Google organic search traffic is creating a bit of frustration for our team, mainly because we’re unable to show our clients (or our team), with 100% certainty, which keywords were used to find the site through Google organic search (as we could prior to October).
However, since total organic search traffic has not been affected, we can still use the visible keywords to get a reasonable understanding of what people search for to find a particular site.
My opinion is that Google needs to find a way to fully show the keywords people use in finding sites. Otherwise, we may have to use other software to monitor our clients’ traffic.
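In the meantime, one rough workaround is to redistribute the (not provided) visits across the keywords we can still see, in proportion to each keyword’s share of the visible traffic. Below is a minimal PHP sketch of that idea; the keyword list and visit counts are hypothetical, and in practice the real numbers would come from a Google Analytics export.

<?php
// Hypothetical keyword report exported from Google Analytics (visits per keyword).
$visibleKeywords = array(
    'web design sydney'    => 1200,
    'seo services'         => 850,
    'website optimization' => 450,
);

// Visits reported under "(not provided)" for the same period (hypothetical figure).
$notProvidedVisits = 1144;

// Total visits for the keywords we can still see.
$visibleTotal = array_sum($visibleKeywords);

// Redistribute the hidden visits in proportion to each visible keyword's share.
foreach ($visibleKeywords as $keyword => $visits) {
    $estimatedExtra = round($notProvidedVisits * ($visits / $visibleTotal));
    printf("%-22s %5d visits (+ ~%d estimated from (not provided))\n",
           $keyword, $visits, $estimatedExtra);
}
?>

This only gives an estimate, of course; it assumes the hidden searches follow the same keyword distribution as the visible ones, which is exactly the assumption we can no longer verify.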

Friday 25 November 2011

Importance of Keyword

Keyword analysis is necessary because it is about much more than search engine behaviour. Keywords are the most solid evidence you can find when researching the way your users think. Even in-depth user surveys won’t be quite as reliable as the data people give out when they’re not aware of being watched. While your users might tell you that they come to you for quality service or discount prices, keywords can tell you that they come to you when they’re bored, or only when they’re after something very specific. Sometimes, keyword analysis will tell you that your users aren’t who you thought they were.

Thursday 17 November 2011

Raising awareness of cross-domain URL selections

A piece of content can often be reached via several URLs, not all of which may be on the same domain. A common example we’ve talked about over the years is having the same content available on more than one URL, an issue known as duplicate content. When we discover a group of pages with duplicate content, Google uses algorithms to select one representative URL for that content. A group of pages may contain URLs from the same site or from different sites. When the representative URL is selected from a group with different sites the selection is called a cross-domain URL selection. To take a simple example, if the group of URLs contains one URL from a.com and one URL from b.com and our algorithms select the URL from b.com, the a.com URL may no longer be shown in our search results and may see a drop in search-referred traffic.

Webmasters can greatly influence our algorithms’ selections using one of the currently supported mechanisms to indicate the preferred URL, for example using rel="canonical" elements or 301 redirects. In most cases, the decisions our algorithms make in this regard correctly reflect the webmaster’s intent. However, in some rare cases our algorithms select a URL the webmaster did not intend, and we’ve found many webmasters are confused as to why it has happened and what they can do if they believe the selection is incorrect.
To be transparent about cross-domain URL selection decisions, we’re launching new Webmaster Tools messages that will attempt to notify webmasters when our algorithms select an external URL instead of one from their website. The details about how these messages work are in our Help Center article about the topic, and in this blog post we’ll discuss the different scenarios in which you may see a cross-domain URL selection and what you can do to fix any selections you believe are incorrect.

Common causes of cross-domain URL selection

There are many scenarios that can lead our algorithms to select URLs across domains.
In most cases, our algorithms select a URL based on signals that the webmaster implemented to influence the decision. For example, a webmaster following our guidelines and best practices for moving websites is effectively signalling that the URLs on their new website are the ones they prefer for Google to select. If you’re moving your website and see these new messages in Webmaster Tools, you can take that as confirmation that our algorithms have noticed.
However, we regularly see webmasters ask questions when our algorithms select a URL they did not want selected. When your website is involved in a cross-domain selection, and you believe the selection is incorrect (i.e. not your intention), there are several strategies to improve the situation. Here are some of the common causes of unexpected cross-domain URL selections that we’ve seen, and how to fix them:
  1. Duplicate content, including multi-regional websites: We regularly see webmasters use substantially the same content in the same language on multiple domains, sometimes inadvertently and sometimes to geotarget the content. For example, it’s common to see a webmaster set up the same English language website on both example.com and example.net, or a German language website hosted on a.de, a.at, and a.ch. Depending on your website and your users, you can use one of the currently-supported canonicalization techniques to signal to our algorithms which URLs you wish selected. Please see the following articles about this topic:
    • Canonicalization, specifically rel="canonical" elements and 301 redirects
    • Multi-regional and multilingual sites and more about working with multi-regional websites
    • About rel="alternate" hreflang="x"
  2. Configuration mistakes: Certain types of misconfigurations can lead our algorithms to make an incorrect decision. Examples of misconfiguration scenarios include:
    1. Incorrect canonicalization: Incorrect usage of canonicalization techniques pointing to URLs on an external website can lead our algorithms to select the external URLs to show in our search results. We’ve seen this happen with misconfigured content management systems (CMS) or CMS plugins installed by the webmaster. To fix this kind of situation, find how your website is incorrectly indicating the canonical URL preference (e.g. through incorrect usage of a rel="canonical" element or a 301 redirect) and fix that (see the sketch after this list).
    2. Misconfigured servers: Sometimes we see hosting misconfigurations where content from site a.com is returned for URLs on b.com. A similar case occurs when two unrelated web servers return identical soft 404 pages that we may fail to detect as error pages. In both situations we may assume the same content is being returned from two different sites and our algorithms may incorrectly select the a.com URL as the canonical of the b.com URL. You will need to investigate which part of your website’s serving infrastructure is misconfigured. For example, your server may be returning HTTP 200 (success) status codes for error pages, or your server might be confusing requests across different domains hosted on it. Once you find the root cause of the issue, work with your server admins to correct the configuration.
  3. Malicious website attacks: Some attacks on websites introduce code that can cause undesired canonicalization. For example, the malicious code might cause the website to return an HTTP 301 redirect or insert a cross-domain rel="canonical" link element into the HTML or HTTP header, usually pointing to an external URL hosting malicious content. In these cases our algorithms may select the malicious or spammy URL instead of the URL on the compromised website. In this situation, please follow our guidance on cleaning your site and submit a reconsideration request when done. To identify cloaked attacks, you can use the Fetch as Googlebot function in Webmaster Tools to see your page’s content as Googlebot sees it.
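For the canonicalization and server-configuration problems above, the fix usually comes down to sending the right redirect, canonical hint, or status code from your own server. The PHP sketch below illustrates the general idea; the domain names are hypothetical, and the same rules can equally be expressed in your .htaccess or server configuration depending on how your site and CMS are set up.

<?php
// 1. Preferred-domain consolidation: if a request arrives on the duplicate
//    domain (www.example.net), send a permanent 301 redirect to the preferred
//    domain (www.example.com) so the preference is unambiguous.
if ($_SERVER['HTTP_HOST'] === 'www.example.net') {
    header('HTTP/1.1 301 Moved Permanently');
    header('Location: http://www.example.com' . $_SERVER['REQUEST_URI']);
    exit;
}

// 2. Alternatively, keep both domains live but point search engines at the
//    preferred URL with a rel="canonical" element in the page <head>.
$canonicalUrl = 'http://www.example.com' . $_SERVER['REQUEST_URI'];
echo '<link rel="canonical" href="' . htmlspecialchars($canonicalUrl) . '" />';

// 3. Soft-404 fix: genuine error pages should return a real 404 status code,
//    not a 200, so that two unrelated sites' error pages are never treated
//    as "the same content".
function show_error_page() {
    header('HTTP/1.1 404 Not Found');
    echo '<h1>Page not found</h1>';
    exit;
}
?>

Whichever mechanism you use, the point is that every signal — redirect, canonical element, and status code — should agree on which URL you want selected.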

Wednesday 16 November 2011

Website optimization techniques

Website content optimization is common sense: who doesn’t want a fast site that’s easy to find, with engaging content? Here are some key metrics and optimization techniques (a short worked example of the arithmetic follows at the end of this section):


Unique visitors
A server hit is an HTTP request for a single web object. One web page view can require many hits to the server. You can increase your unique audience by providing fast, relevant, engaging web pages. Tracking new unique visitors helps you measure audience growth.

Average time on site and visit length
Time on site, as reported by tools such as ClickTracks, is one of the best measures of user engagement.

Pages per visit
The number of pages consumed during a visit is a broad and simple measure of user engagement. A high pages-per-visit figure can indicate possible flow states of high engagement.

Bounce rate
The bounce rate is the percentage of users who left your site from the entry page without browsing to another page. You should examine pages with high bounce rates closely for improvements to content.

Conversion rate
The ratio of the number of objectives accomplished to the number of unique visitors is your conversion rate. Improving your content and providing fast, relevant, engaging pages can help boost your conversion rate.

Primary content consumption
Every site visit has an entry point. Primary content consumption is the percentage of time that a page constitutes the first impression of a site.

Path loss
Path loss is the percentage of times that a page was seen within a visitor’s navigation path where the visit was terminated without bouncing. Path loss can indicate incomplete information or misguided search marketing.

Keywords or campaigns
Track which keywords or campaigns are making you the most money or gaining you the best search positions.

Cost per conversion
If the cost per conversion for a particular campaign, ad group, or keyword is larger than the average sale value for the included items, that campaign is losing money and should be reworked or paused.
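To make the arithmetic behind these metrics concrete, here is a small PHP sketch that derives pages per visit, bounce rate, conversion rate, and cost per conversion from a handful of traffic figures. All of the numbers are hypothetical; in practice they would come from your analytics and advertising reports.

<?php
// Hypothetical monthly traffic figures.
$visits           = 5000;    // total visits
$uniqueVisitors   = 3800;    // unique visitors
$pageViews        = 14500;   // total page views
$singlePageVisits = 2100;    // visits that left from the entry page
$conversions      = 95;      // completed objectives (sales, sign-ups, ...)
$campaignCost     = 1900.00; // total campaign spend (e.g. paid search)

$pagesPerVisit     = $pageViews / $visits;
$bounceRate        = $singlePageVisits / $visits * 100;
$conversionRate    = $conversions / $uniqueVisitors * 100;
$costPerConversion = $campaignCost / $conversions;

printf("Pages per visit:     %.1f\n", $pagesPerVisit);
printf("Bounce rate:         %.1f%%\n", $bounceRate);
printf("Conversion rate:     %.1f%%\n", $conversionRate);
printf("Cost per conversion: $%.2f\n", $costPerConversion);
?>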

Tuesday 15 November 2011

Ten recent algorithm changes

Ten recent algorithm changes by Google.com

Cross-language information retrieval updates: For queries in languages where limited web content is available (Afrikaans, Malay, Slovak, Swahili, Hindi, Norwegian, Serbian, Catalan, Maltese, Macedonian, Albanian, Slovenian, Welsh, Icelandic), we will now translate relevant English web pages and display the translated titles directly below the English titles in the search results. This feature was available previously in Korean, but only at the bottom of the page. Clicking on the translated titles will take you to pages translated from English into the query language.



Snippets with more page content and less header/menu content: This change helps us choose more relevant text to use in snippets. As we improve our understanding of web page structure, we are now more likely to pick text from the actual page content, and less likely to use text that is part of a header or menu.

Better page titles in search results by de-duplicating boilerplate anchors: We look at a number of signals when generating a page’s title. One signal is the anchor text in links pointing to the page. We found that boilerplate links with duplicated anchor text are not as relevant, so we are putting less emphasis on these. The result is more relevant titles that are specific to the page’s content.
Length-based autocomplete predictions in Russian: This improvement reduces the number of long, sometimes arbitrary query predictions in Russian. We will not make predictions that are very long in comparison either to the partial query or to the other predictions for that partial query. This is already our practice in English.

Extending application rich snippets: We recently announced rich snippets for applications. This enables people who are searching for software applications to see details, like cost and user reviews, within their search results. This change extends the coverage of application rich snippets, so they will be available more often.

Retiring a signal in Image search: As the web evolves, we often revisit signals that we launched in the past that no longer appear to have a significant impact. In this case, we decided to retire a signal in Image Search related to images that had references from multiple documents on the web.

Fresher, more recent results: As we announced just over a week ago, we’ve made a significant improvement to how we rank fresh content. This change impacts roughly 35 percent of total searches (around 6-10% of search results to a noticeable degree) and better determines the appropriate level of freshness for a given query.

Refining official page detection: We try hard to give our users the most relevant and authoritative results. With this change, we adjusted how we attempt to determine which pages are official. This will tend to rank official websites even higher in our ranking.

Improvements to date-restricted queries: We changed how we handle result freshness for queries where a user has chosen a specific date range. This helps ensure that users get the results that are most relevant for the date range that they specify.

Prediction fix for IME queries: This change improves how Autocomplete handles IME queries (queries which contain non-Latin characters). Autocomplete was previously storing the intermediate keystrokes needed to type each character, which would sometimes result in gibberish predictions for Hebrew, Russian and Arabic.

Monday 7 November 2011

Google Search Algorithm Updates

If and Else Statement

The if and else statements can be very helpful in controlling the flow and resulting output of your scripts. They open your site up to practically unlimited possibilities: for example, you can display different messages based on different content or different visitors.

EXAMPLE OF CODE

<?php
// Classify an age into a group using if / else if / else.
$ages = 45;
if ($ages >= 1 && $ages <= 4) {
    echo "Infant";
} else if ($ages >= 5 && $ages <= 12) {
    echo "Child";
} else if ($ages >= 13 && $ages <= 19) {
    echo "Teen";
} else if ($ages >= 20 && $ages <= 40) {
    echo "Youth";
} else if ($ages >= 41 && $ages <= 60) {
    echo "Adult"; // $ages is 45, so this branch runs
} else {
    echo "Enter correct value";
}
?>