Project Options

Here you have the key to success for your project. You can define almost every aspect that's important for a successful SEO strategy.

The following settings specify how your project submits and verifies links.

To make the link building process as natural as possible, you might want to use this option to avoid creating too many links at once. Click on the label “Pause the project after” to change it to “Stop the project after”.

The same goes for the type: when you click on “submissions for”, you can change it to one of the other options.

  • submissions for - pause/stop the project after XYZ submissions were made
  • verifications for - pause/stop the project after XYZ newly verified URLs
  • submissions reached a day - pause/stop the project after XYZ submissions were reached per day
  • verifications reached a day - pause/stop the project after XYZ newly verified URLs were reached per day

When you enable the option “per URL”, the number of submissions/verifications is counted for each project URL separately and not for all of them together. This can be useful for tier projects where the number of URLs grows over time, so the number of backlinks created in a certain time frame should grow with it.

Important notes:

  • If you use the option to stop/pause on newly verified URLs, you will most likely see the project go over the set number, as it keeps trying to verify links even while paused. The project cannot know when a link is put live by site administrators, so it is difficult to find the right time.
  • The pause/stop option for submitted URLs only counts submissions where a URL is involved (a link is placed), not the submissions required to create accounts. You will therefore most likely see more submissions made than you set up, but this is not really a problem.
  • Using a custom verification time together with a pause/stop on submitted URLs is a bit dangerous, as the project might easily go over the limit if the custom verification time is set too high. You should set it to 30 minutes at most here.

A lot of submissions require you to fill in captchas. These captchas can be solved automatically using captcha services or programs. If you have configured captcha services in the global options, you can use all of them here in the project or just a few (click on the label to change the behavior).

If you enable solving by captcha services, you can set a custom number of retries. Each retry downloads a new captcha image in the hope that it can be solved better than the previous one that failed. This overrides the setting from the global program options.

The option “Skip hard to solve captchas” skips ReCaptcha and SolveMedia captchas. In the past it also skipped Mollom captchas, but a recent update of Captcha Breaker raised its solve rate for those to over 50%, so Mollom was removed from the skip list.

The custom mode asks you to enter/change the data before the actual submission takes place. This can be very useful for projects where you want to make sure that the submitted data is actually related to the topic of the web page. You can e.g. use it for Blog Comment projects: when a site is found to submit to, it asks you to review the data to be sent (including a small description of the web page and its content).

Only a verified link is worth anything. Any submission made by our software gets checked to be live and visible (even for users who are not logged in), and only those links are added to the project's verified URLs. However, verification also means email verification (e.g. an account creation will require you to click a link in the email sent). The program takes care of everything for you so that you can concentrate on other things. There are several other options that influence the verification process.

You have to use this option if your project URLs are from YouTube or other Web 2.0 sites. Otherwise the program will just check for http://www.youtube.com to be present in the link and skip the rest. However, if you use the “Fast Indexer” engine, you should not use this option, so that you also get verified links from statistics sites that usually only link to the root URL.

When to Verify

You should leave this at Automatically unless you have something special in mind, e.g. you always want to submit to sites as fast as possible and not waste time on verification.

Don't remove URLs

This will try to verify submitted links endlessly, even past the point where the program would normally have removed them, assuming it is useless to hope the link will ever go live.

Remove after 1st verification try

This option does the opposite of the one above and removes a link right after it has been checked once.

Check E-Mails only

This option only checks emails for activation links in order to continue the submission (login and submission of links). It does not verify whether the submitted URL is live and visible.

When enabled, the project also re-verifies previously created and verified URLs. This makes sense in two scenarios:

  1. The project is a tier project and should only build links to URLs from the main project that still exist.
  2. The project should keep a clean list of verified URLs and remove all those that have been deleted by administrators or are no longer present due to a server error or other issues.

A link might also no longer be present when it moves from the previously verified URL to page 2 or further as new URLs are posted to that site. If you want to keep such sites, you might want to use the option “Skip engines with moving pages”.

Character spinning exchanges certain letters in words with characters that look the same to humans but are actually completely different letters, e.g. from a foreign alphabet or from math symbols.

For example, “WҺɑts cҺаrасteг ѕpіnning аnyѡaу” is readable by anyone, but it actually includes letters/symbols that are not from the Latin alphabet. Many of its characters are not the ones you expect.

You might ask yourself what the use of this is. It is useful for creating non-duplicate content: search engines will see it as a new article/sentence because they cannot find the same text in their database. However, this only makes sense for words you are not trying to rank for. That's why there is an option to use it only on stop words. The program also tries not to apply it to anchor texts or HTML syntax.
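
As a minimal Python sketch of the idea, assuming a tiny homoglyph table and the stop-word rule mentioned above (the program's actual table and rules are not documented here):

    # Hedged sketch of character spinning with a tiny homoglyph excerpt.
    HOMOGLYPHS = {
        "a": "а",  # Cyrillic a (U+0430)
        "e": "е",  # Cyrillic e (U+0435)
        "o": "о",  # Cyrillic o (U+043E)
        "c": "с",  # Cyrillic es, looks like Latin c (U+0441)
        "p": "р",  # Cyrillic er, looks like Latin p (U+0440)
    }

    def spin_characters(text: str, stop_words: set[str]) -> str:
        words = []
        for word in text.split():
            # Assumed rule: only touch stop words, leave keywords intact.
            if word.lower() in stop_words:
                word = "".join(HOMOGLYPHS.get(ch, ch) for ch in word)
            words.append(word)
        return " ".join(words)

    print(spin_characters("this is a ranking test", {"this", "is", "a"}))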

Even though you normally try to avoid grammar or spelling mistakes in posted data, content with a few such mistakes looks more natural than 100% correctly written content. This can be used for Blog Comment projects, where you usually write just a short sentence to comment on articles or blog entries.

Possible spelling errors the program generates automatically are (a small sketch follows the list):

  • duplicating a character in a word
    Example: Ranking becomes Raanking
  • skipping a character in a word
    Example: Ranking becomes Rnking
  • swapping two adjacent letters in a word
    Example: Ranking becomes Rnaking
  • using a letter next to the intended one on a US keyboard
    Example: Ranking becomes Rsnking, as s is next to a on the keyboard
  • adding an extra letter next to the intended one on a US keyboard
    Example: Ranking becomes Rasnking
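
A minimal Python sketch of these five error types could look like this (the keyboard-neighbor map is a tiny illustrative excerpt; how the program actually picks positions and error types is an assumption):

    import random

    NEIGHBORS = {"a": "sq", "r": "te", "n": "bm"}  # tiny US keyboard excerpt

    def typo(word: str) -> str:
        if len(word) < 2:
            return word
        i = random.randrange(len(word))
        kind = random.choice(["duplicate", "skip", "swap", "replace", "insert"])
        if kind == "duplicate":
            return word[:i] + word[i] + word[i:]          # Raanking
        if kind == "skip":
            return word[:i] + word[i + 1:]                # Rnking
        if kind == "swap":
            if i == len(word) - 1:
                i -= 1                                    # swap needs a right neighbor
            return word[:i] + word[i + 1] + word[i] + word[i + 2:]  # Rnaking
        neighbors = NEIGHBORS.get(word[i].lower())
        if not neighbors:
            return word                                   # no keyboard neighbor known
        neighbor = random.choice(neighbors)
        if kind == "replace":
            return word[:i] + neighbor + word[i + 1:]     # Rsnking
        return word[:i + 1] + neighbor + word[i + 1:]     # insert: Rasnking

    print(typo("Ranking"))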

Some engines allow you to place a link in their descriptions using HTML or BBCode syntax. This is recognized by the program, and a link is placed there as well; otherwise a link might only be added for the field asking for a “homepage”. This can get you a backlink with your anchor text in it instead of a plain URL link. However, it also leaves a small footprint, as the text added to the description might be guessable as an automated submission.

Continuously try to post even if failed before

Once a submission is made to a site, it is not tried again in the future, to save time and resources for other sites. However, such a site might still be a good one to post to if the reason for the failed submission was a temporary problem, like a failing captcha solution, a server problem or a proxy issue. With this option enabled, the program will always retry sites where submission was not possible before (i.e. sites that appear in neither the verified nor the submitted URL list).

TAGS are keywords you can assign to articles or blog posts on certain platforms. You can specify here what you want to use for them. This setting is also used for the articles created when searching for related images or videos to insert.

This is a rather special option for advanced users who modify the .dat files shipped with the software. A good example is the modification of the names.dat and lnames.dat files that you can find in the engines folder of the program. They include first and last names to be used on submissions whenever an engine requires one. You can e.g. create a folder like c:\SER_DATA\Russian, put the files into it and change their content to only include Russian names in Cyrillic characters. If you now use that folder in a project, the program will always try to locate a file there first and fall back to the original one if it is not found.

This is also very useful when your articles are not in English but in another language. You can then copy the auto_anchor-*.dat files to your custom folder and translate them to your language, so the article content is all in the same language.
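
A short Python sketch of the lookup order described above (the folder paths here are examples only, not the program's actual install location):

    import os

    # Prefer the file in the custom data folder; fall back to the
    # program's original engines folder if it is not found there.
    def resolve_dat(filename: str,
                    custom_dir: str = r"c:\SER_DATA\Russian",
                    default_dir: str = r"c:\SER\engines") -> str:
        custom_path = os.path.join(custom_dir, filename)
        if os.path.isfile(custom_path):
            return custom_path
        return os.path.join(default_dir, filename)

    print(resolve_dat("names.dat"))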

The term target URLs is used within the software to describe the sites it will create backlinks on. There are several ways to get these URLs, described below.

The box offers a lot of search engines to choose from. All selected engines will be used to search for new target URLs. Please make sure not to use search engines from countries that don't speak the language of your project's keywords; that would make no sense. It's also not recommended to use all search engines. Right-click on the box to choose between many options for auto-selecting the engines you want.

Please note that not every engine uses the project's keywords to locate new sites. The dark green colored engines always use them, the light green ones sometimes, and the yellow ones not at all.

You can also add new search engines if you think one is missing.

Always use keywords to find targets

As written above, only a few engines use keywords to locate new targets. With this option turned on, all of them will use a keyword from your project. However, this limits the chance of getting enough target URLs.

Add stop words to query with X%

Stop words are words that are commonly used in a language to give a sentence meaning. They can be added to search queries to increase the chance of finding new places to link to. The program comes with a lot of stop words in different languages, which you can edit and change.
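
As an illustration, a hedged Python sketch of what “add stop words to query with X%” could do (the exact query format and word selection used by the program are assumptions):

    import random

    STOP_WORDS = ["the", "and", "with", "from"]  # tiny illustrative excerpt

    def build_query(base_query: str, percent: int = 30) -> str:
        # Append a random stop word to roughly <percent>% of the queries.
        if random.random() < percent / 100:
            return base_query + " " + random.choice(STOP_WORDS)
        return base_query

    print(build_query('"powered by wordpress" seo'))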

The global site list can be enabled in the program options and holds a list of previously found or imported URLs. Instead of searching for each project with the same query, it is useful to store the results and not waste resources on repeated searches. It's highly recommended to make use of this feature. Also note that a lot of people sell site lists on our forum, where you can get fast results without letting the program find sites itself.

There are different types of site lists to choose from.

  • identified - A URL was successfully identified as being based on a supported engine.
  • submitted - A URL was identified and submitted to.
  • verified - A URL was identified, submitted to, and the link was actually present later on.
  • failed - A URL was identified but a later submission failed.

The option to not add newly found URLs does just what it says: no new entries are added to the list.

This option takes one of your already verified URLs from the project, downloads the content and extracts all external links from it. Those links are then used as target URLs, in the hope that someone else has placed a link in the same place and that it is a site based on one of the supported engines. What might sound a bit weird to you actually works very nicely, because people do not link their main site on e.g. guest book sites; they link from guest books to social network sites and from there to their main site. As a result you get many potential new target URLs on those types of platforms.
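
A hedged Python sketch of this step, using only the standard library (the program's real implementation is not documented here):

    from html.parser import HTMLParser
    from urllib.parse import urljoin, urlparse
    from urllib.request import urlopen

    class LinkCollector(HTMLParser):
        """Collects all href values from <a> tags on a page."""
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":
                href = dict(attrs).get("href")
                if href:
                    self.links.append(href)

    def external_links(verified_url: str) -> list[str]:
        # Download the verified page and keep only links to other domains.
        html = urlopen(verified_url).read().decode("utf-8", errors="ignore")
        parser = LinkCollector()
        parser.feed(html)
        own_host = urlparse(verified_url).netloc
        absolute = [urljoin(verified_url, link) for link in parser.links]
        return [url for url in absolute
                if urlparse(url).netloc not in ("", own_host)]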

Analyze and post to competitors backlinks

This takes one of your keywords and researches who dominates the search engines for it. All backlinks found for that competitor are added to your project's target URLs, so the program can try to place a link on them as well.

This option allows the program to post to the same site again. This can happen by registering another account or by simply logging in and e.g. posting another blog entry/article.

Post first article without link

This is useful to avoid being detected as someone building links automatically. Some sites will see you posting an article and instantly delete the account if it promotes a site. Usually it is not a problem to have links in later postings.

This option is built with the same idea in mind as the option above. It increases the chance that your account stays active longer. It's also more natural for search engines to see an article with no links at all, maybe with only citations in it.

A new account on the same site is only created after the entered time has passed.

Time to wait before first post

This option is useful to avoid getting an account deleted too quickly. Usually site administrators only watch the accounts created within a certain time frame and no longer monitor people's behavior later on. It can be useful to e.g. submit the first post one or two days later. But please keep in mind that you will not see a verified link during that time frame when creating a new project.

This option exists to avoid spamming a site too much and triggering the administrator's attention. It's useful not to submit an article to the same site every minute, but to wait a couple of days instead.

Each account can be used to post multiple new articles. Please make use of this instead of creating new accounts. The +/- keeps that value a bit more random and natural.

A new account is created and new articles are submitted. Please make sure you have at least the defined number of email accounts configured, as usually no site will allow you to sign up for a new account with an email address that was used before.

This section filters the found/imported URLs based on your options.

This filter is also called OBL (Outgoing BackLinks) and checks how many links are placed on a page. A page is not a good place to put a link on if there are more than 150 links on it; this is also called a “bad neighborhood”.
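
A minimal Python sketch of such an OBL check (counting <a href> tags is a simplification; what the software actually counts as an outgoing link is an assumption):

    import re
    from urllib.request import urlopen

    def too_many_outgoing_links(url: str, limit: int = 150) -> bool:
        # Skip the target if the page carries more links than the limit.
        html = urlopen(url).read().decode("utf-8", errors="ignore")
        return len(re.findall(r"<a\s[^>]*href=", html, re.IGNORECASE)) > limit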

PR is short for PageRank and is a way for Google to rate how valuable a site is. A PR of 10 is the best a site can have and 0 is the worst. However, there hasn't been a PR update for many months now, and it seems Google will replace it in the future. It also doesn't mean much if a site has a PR of 0; usually such a site still helps ranking a page. You should keep this unchecked. If you still plan to use it, keep in mind that extensive PR checking slows down the link building and can eventually get you banned on Google for doing too many queries.

You can also check the option to skip sites with a PR above a certain value. It's not an option most people would use, as the aim is usually to get backlinks with a high PR. However, there are situations where you want to save sites with a high PR for other projects or tiers and skip them for low level projects.

The PR value can be checked against the URL, domain or subdomain. The PR of a URL is usually much lower than the PR of its domain. You should keep this at domain level.

Skipping sites with unknown PR is another option. If turned on, make sure you do not get banned from PR checking; otherwise you will not be able to build any links, as all PR queries will return unknown.

This box allows you to define the types of backlinks you want and those you want to skip. Some engines can create several types of links and will skip a certain type if you uncheck it here. The most valuable are articles with contextual links (your anchor text with your URL, surrounded by text based on your project settings), followed by anchor text links (your anchor text with your URL). The rest are usually plain URLs or other types of links.

NoFollow means that the placed link is supposedly ignored by search engines. However, this is not really the case: there are signs that even these links are valuable, especially when coming from sites with a high PR. Having only DoFollow links is unnatural as well, so you should consider not using this filter. If you click on the option label, you can change it to create only DoFollow links.

Please note that this option does not work 100% of the time. For some engines the software doesn't know in advance what link type it will create. Before it posts, it checks the page for the number of already placed nofollow links and decides based on that whether to place a link or not.
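
For illustration, a hedged Python sketch of such a pre-check (the software's actual decision rule is not documented; the majority vote and the simple attribute match here are assumptions):

    import re
    from urllib.request import urlopen

    def page_looks_nofollow(url: str) -> bool:
        # Guess the likely link type from the links already on the page.
        html = urlopen(url).read().decode("utf-8", errors="ignore")
        links = re.findall(r"<a\s[^>]*>", html, re.IGNORECASE)
        nofollow = [tag for tag in links if "nofollow" in tag.lower()]
        return len(nofollow) > len(links) / 2 if links else False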

You don't really want this one checked. It's a special option, not useful for most of us. If checked, it will skip submission to all sites on a subdomain like http://sub.domain.com.

It happens that the software finds target sites without a domain, just an IP address. This does not happen a lot, but in that case you might want to skip posting to them, as the software might e.g. have already posted to the same site using its real domain.

This is a useful option to only build links on sites related to the project. If turned on, it checks that any of the added keywords is present on the page (click the label to change where it has to appear). Some engines create their own page that you fill with data, so it is not important for them, but it might be important for other engines like Blog Comments. It's up to you if you want to make use of this filter.

Every word on this list is checked against the page's visible text and, if present, results in a skip. You can import your own list of words or use the default one. A skip word can be defined in two ways: 1. you enter a so-called mask like *badword*, or 2. you enter just the word like badword.

In example 1 you skip all sites where the string badword is included anywhere.
In example 2 you skip only sites where the word badword is present on its own. The difference is that there must be a non-alphabetic character in front of and behind the word.

This is basically the same as the above filter, but checked against the domain. The only difference is that a filter with badword behaves the same as *badword*, because domains often have no separator between words.
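
A small Python sketch of the two matching modes for page text, assuming case-insensitive matching:

    import re

    def is_skipped(text: str, skip_word: str) -> bool:
        # Mask form: *badword* matches the string anywhere.
        if skip_word.startswith("*") and skip_word.endswith("*"):
            return skip_word.strip("*").lower() in text.lower()
        # Bare form: badword must be delimited by non-alphabetic characters.
        pattern = r"(?<![a-z])" + re.escape(skip_word) + r"(?![a-z])"
        return re.search(pattern, text, re.IGNORECASE) is not None

    print(is_skipped("some badwords here", "*badword*"))  # True (substring match)
    print(is_skipped("some badwords here", "badword"))    # False (word not delimited)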

Before posting, the country of a site is detected in two ways.

1. By checking the top level domain (e.g. gsa-online.de has the top level domain .de and should be located in Germany).
2. By checking the IP address it is registered on. The program comes with a database of IPs and their country locations.

The second detection is only done if the first one does not give a clear result (e.g. a .com, .org or .net domain is not tied to a country).
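
A hedged Python sketch of this two-step detection (the TLD table is a tiny excerpt, and ip_to_country stands in for the program's built-in IP database):

    from urllib.parse import urlparse

    TLD_COUNTRY = {"de": "Germany", "fr": "France", "ru": "Russia"}
    GENERIC_TLDS = {"com", "org", "net", "info"}

    def detect_country(url: str, ip_to_country) -> str | None:
        tld = urlparse(url).netloc.rsplit(".", 1)[-1].lower()
        if tld in TLD_COUNTRY:
            return TLD_COUNTRY[tld]    # step 1: country-specific top level domain
        if tld in GENERIC_TLDS:
            return ip_to_country(url)  # step 2: look up the IP in the database
        return None

    print(detect_country("http://gsa-online.de", lambda url: None))  # Germany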

The detection of a site's language is done in two ways.

1. The page is analyzed for special HTML tags where the language is defined.
2. By checking against the country, as in the option above.

If detection in point 1 fails, the program tries to derive the language from the country. This does not work for every country, of course, as in some countries more than one language is spoken. But for many countries this works fine, e.g. Germany always maps to German. The program also takes the closest match in case a country is known to have one language that almost everyone there speaks.
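
A hedged Python sketch of this two-step language detection (both lookup tables are tiny illustrative excerpts; the tags the program actually inspects are assumptions):

    import re

    LANG_CODES = {"de": "German", "fr": "French", "ru": "Russian"}
    COUNTRY_LANGUAGE = {"Germany": "German", "France": "French", "Russia": "Russian"}

    def detect_language(html: str, country: str | None) -> str | None:
        # Step 1: look for an HTML lang attribute, e.g. <html lang="de">.
        match = re.search(r'<html[^>]*\blang="([a-z]{2})"', html, re.IGNORECASE)
        if match:
            return LANG_CODES.get(match.group(1).lower())
        # Step 2: fall back to the detected country.
        return COUNTRY_LANGUAGE.get(country)

    print(detect_language('<html lang="de"></html>', None))  # German
    print(detect_language("<html></html>", "Germany"))        # German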