
Benefits of Pay Per Click Campaigns

It gives your business the benefit of:

- Increased traffic to your site
- Increased leads, and thereby increased sales
- Higher return on investment (ROI)
- Targeting specific markets with specific ads
- Creating brand awareness and the necessary buzz around your brand

Pay Per Click

Pay-per-click advertising, also known as PPC, is a form of online advertising in which visitors are directed to an advertiser's website after clicking on an advertisement, and the advertiser pays for each visitor on a per-click basis. Each click can cost anywhere from 1 pence to several pounds, depending on many factors, including the pay-per-click search engine being advertised on and the search phrase being targeted. Perhaps the best-known example of pay-per-click advertising is Google's PPC programme, Google AdWords. In the screengrab of a Google search results page below, the PPC results have a red box around them. Google AdWords, in a nutshell, works like this: when a Google user types in a search phrase applicable to your business, and you are bidding on that search phrase, your text ad has a chance to be displayed. You are only charged when the user clicks on your text ad, hence the name pay-per-click. Successful PPC...
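The per-click charging model above lends itself to simple campaign arithmetic. The sketch below shows one way to relate clicks, cost per click, conversions, and return; the figures are illustrative assumptions, not real AdWords data.

```python
# A minimal sketch of PPC campaign arithmetic; the numbers below are
# illustrative assumptions, not real AdWords data.

def campaign_roi(clicks, cost_per_click, conversions, value_per_conversion):
    """Return (total cost, revenue, ROI as a fraction of spend)."""
    cost = clicks * cost_per_click          # the advertiser pays per click
    revenue = conversions * value_per_conversion
    roi = (revenue - cost) / cost           # e.g. 0.5 means a 50% return
    return cost, revenue, roi

cost, revenue, roi = campaign_roi(
    clicks=1000, cost_per_click=0.40,        # £0.40 per click
    conversions=25, value_per_conversion=30  # 2.5% convert at £30 each
)
print(cost, revenue, roi)  # 400.0 750 0.875
```

In practice the cost per click is set by an auction among bidders on the search phrase, so it varies from click to click; a fixed rate is used here only to keep the example short.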

Online Searching

WebCrawler

Brian Pinkerton of the University of Washington released WebCrawler on April 20, 1994. It was the first crawler to index entire pages. It soon became so popular that during daytime hours it could not be used. AOL eventually purchased WebCrawler and ran it on their network. Then in 1997, Excite bought out WebCrawler, and AOL began using Excite to power its NetFind. WebCrawler opened the door for many other services to follow suit. Within a year of its debut came Lycos, Infoseek, and OpenText.

Parts of a Search Engine:

Search engines consist of three main parts. Search engine spiders follow links on the web to request pages that are either not yet indexed or have been updated since they were last indexed. These pages are crawled and added to the search engine index (also known as the catalog). When you search using a major search engine you are not actually searching the web, but a slightly outdated index of content which roughly represents the content of the web. The third part of a search engine is the search interface and relevancy software. For each search query, search engines typically do most or all of the following:

- Accept the user-inputted query, checking for any advanced syntax and checking whether the query is misspelled, in order to recommend more popular or correctly spelled variations.
- Check whether the query is relevant to other vertical search databases (such as news search or product search) and place relevant links to a few items from th...
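The "index" part described above is commonly built as an inverted index: a mapping from each word to the set of documents containing it. The sketch below shows the idea with a toy document set and a deliberately simplified tokenizer; both are assumptions for illustration, not how any particular engine stores its catalog.

```python
# A minimal sketch of an inverted index: word -> set of doc ids.
from collections import defaultdict

def build_index(docs):
    """docs: {doc_id: text}. Returns {word: set of doc_ids containing it}."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for word in text.lower().split():
            index[word].add(doc_id)
    return index

def search(index, query):
    """Return doc_ids containing every word of the query (AND search)."""
    sets = [index.get(w, set()) for w in query.lower().split()]
    return set.intersection(*sets) if sets else set()

docs = {
    1: "pay per click advertising",
    2: "search engine spiders crawl the web",
    3: "search advertising on the web",
}
index = build_index(docs)
print(search(index, "search web"))  # {2, 3}
```

Looking words up in this structure takes roughly constant time per query term, which is why the index, rather than the live web, is what a query actually searches.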

What is a Bot?

Computer robots are simply programs that automate repetitive tasks at speeds impossible for humans to reproduce. The term bot on the internet is usually used to describe anything that interfaces with the user or that collects data. Search engines use "spiders", which search (or spider) the web for information. They are software programs which request pages much like regular browsers do. In addition to reading the contents of pages for indexing, spiders also record links. Link citations can be used as a proxy for editorial trust, link anchor text may help describe what a page is about, and link co-citation data may be used to help determine what topical communities a page or website exists in. Additionally, links are stored to help search engines discover new documents to crawl later. Another bot example is the chatterbot, a resource-heavy bot built around a specific topic; these bots attempt to act like a human and converse with people on that topic.
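The link-recording behaviour described above can be sketched with Python's standard-library HTML parser. This collects (URL, anchor text) pairs from raw HTML, much as a spider would store them for later crawling; the sample page is an illustrative assumption, and no network fetching is shown.

```python
# A minimal sketch of a spider recording links and their anchor text.
from html.parser import HTMLParser

class LinkSpider(HTMLParser):
    """Collect (href, anchor text) pairs, as a crawler's link store might."""
    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:   # only collect text inside an <a> tag
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None

page = '<p>See <a href="/ppc">pay per click</a> and <a href="/seo">SEO</a>.</p>'
spider = LinkSpider()
spider.feed(page)
print(spider.links)  # [('/ppc', 'pay per click'), ('/seo', 'SEO')]
```

The anchor text gathered here is exactly the signal the paragraph mentions: a short human-written description of what the linked page is about.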

How Web Search Engines Work

A search engine operates in the following order: web crawling, indexing, searching. Web search engines work by storing information about many web pages, which they retrieve from the HTML itself. These pages are retrieved by a Web crawler (sometimes also known as a spider), an automated Web browser which follows every link on a site. Exclusions can be made by the use of robots.txt. The contents of each page are then analyzed to determine how it should be indexed (for example, words can be extracted from the titles, page content, headings, or special fields called meta tags). Data about web pages are stored in an index database for use in later queries. A query can be a single word. The purpose of an index is to allow information to be found as quickly as possible. Some search engines, such as Google, store all or part of the source page (referred to as a cache) as well as information about the web pages, whereas others, s...
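The robots.txt exclusion step mentioned above can be sketched with Python's standard-library parser. The rules here are an illustrative assumption, parsed from a string rather than fetched from a live site.

```python
# A minimal sketch of checking robots.txt before crawling a URL.
from urllib.robotparser import RobotFileParser

rules = """\
User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# A polite crawler calls can_fetch() before requesting each URL.
print(parser.can_fetch("MyCrawler", "http://example.com/index.html"))  # True
print(parser.can_fetch("MyCrawler", "http://example.com/private/x"))   # False
```

In a real crawler the rules would be fetched once per site (conventionally from /robots.txt) and consulted for every candidate URL before it is added to the crawl queue.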