
Archive for the 'Search Engine Optimisation' Category

Blogs Worth Reading

Monday, December 15th, 2008

I’ve never done a round-up of the blogs I read before, which I guess is a bit selfish. So, in no particular order (and this isn’t a complete list), here are some of my favourite blogs, if you’re looking for some inspiration.

Dark SEO Programming is run by Harry. As he puts it, “SEO Tools. I make ‘em”. A great guy if you need help with coding and somewhat of a captcha guru, with a sense of humour. Definitely worth keeping up with. I wouldn’t be surprised if this guy starts making big Google waves in the next few years.

Ask Apache is a blog I absolutely love. Great, detailed tutorials on script optimisation, advanced SEO and mod_rewrite. AskApache’s blog posts are the kind of ones that live in your bookmarks, rather than your RSS Reader.

Andrew Girdwood is a great chap from BigMouthMedia I met last year (although I very much doubt he remembers that). Andrew seems to be a vigilante web bug hunter. What I like about his blog is that he is usually the first to find the weird things going down with Google, which usually gets my brain rolling in the direction of my next nefarious plan. ^_^

Blackhat SEO Blog run by busin3ss is always worth checking out. He was even kind enough to give me a pre-release copy of YACG mass installer to review (it’s coming soon – I’m still playing!). Apart from his excellent tools, his blog features the darker side of link building, which of course, interests me greatly.

Kooshy is a blog run by a guy I know, who.. Well I think he wants to remain anonymous (at least a little). He’s just got started again after closing down his last blog and moving Internet personas (doesn’t the mystery just rivet you?). Anyway, get in early, I think we can expect some good stuff from here. He’s already done a cool post on Pimpin’ Duplicate Content For Links.

Jon Waraas is run by.. Can you guess? Jon has something that a lot of even really smart Internet entrepreneurs are missing: good old fashioned elbow grease. This guy is a workaholic and it pays off in a big way. Apart from time-saving posts on loads of different ways to monetise your site, build backlinks and flush out your competitors, I get quite a lot of inspiration from his constant stream of effort and ideas. I could definitely take a leaf out of his work ethic book.

Blue Hat SEO is becoming one of the usual suspects really. If you’re here, you probably already know about Eli. Being part of my “let’s only do a post every few months club”, I love Eli’s blog because there is absolutely no fluff. He gets straight down to the business of overthrowing Wikipedia, exploiting social media and answering specific SEO questions. You’ll struggle to find higher quality out there.

SEO Book is probably the most “famous” blog I’m going to mention here. Aaron started off at a disadvantage because, to be honest, I thought he was a massive waste of space for quite a while. (I guess that’s what happens when you spend your SEO youth on Sitepoint listening to the people with xx,xxx posts on there.) I bought his SEO Book and for me, at least, it was way too fluffy. I’m pleased he’s started an SEO training service now as it represents much better value. I’m sure he was making a lot of money from his SEO Book, but perhaps milked it too long (like I probably would have). Anyway, I kept with his blog and I’ve been impressed with his attitude and posts. He’s done some really cool stuff, like the SEO Mindmap and, more recently, a keyword strategy flowchart which would be useful for those looking for a more structured search approach. He’s also written about algorithm weightings for different types of keywords and of course has some useful SEO Tools.

Slightly Shady SEO – Great name, great blog. Although XMCP will probably take it as an insult, I’ve always regarded Slightly Shady as the blog most similar to mine on this list. Maybe it’s because I wish I’d written some of the posts he has, before he did, hehe. Again, a no-BS approach to effective SEO: whether he’s writing about Google’s User Data Empire, hiding from it or site automation, it’s all gravy.

The Google Cache is a great blog for analytical approaches to SEO. There are some awesome posts on Advanced Whitehat SEO and using proxies with search position trackers. I like.

SEOcracy is run by a lovely database overlord called Rob. Rob’s a cool guy, he was kind enough to donate some databases to include in the Digerati Blackbox a while back. Most of his databases are stashed away in his content club now, which is well worth a look in. He’s also done some enlightening posts on keyword research, stuffing website inputs and Google Hacking.

This is all I’ve got time for now, apologies if I’ve missed you. There may be a Part II in the near future.

Posted in Affiliate Marketing, Approved Services, Black Hat, Blogging, Digerati News, Google, Grey Hat, Marketing Insights, Research & Analytics, Search Engine Optimisation, Social Marketing, Splogs, Viral Marketing, White Hat, Yahoo | 7 Comments »

Understanding Optimum Link Growth

Friday, December 12th, 2008

Good evening all and Merry Christmas to all those who celebrate this time of year (you Pagans, you!). Rather than sit around the fire talking about yesteryear and smashing whiskey glasses into the fire, I’d like to talk to you about the far more interesting subject of link growth.

Link Growth on The Intertubes
For the context of this conversation (and by that I mean one-way lecture), I am assuming that everyone defines link growth as the rate at which a domain as a whole, and its specific pages, gain new backlinks. More importantly, how quickly search engines discover and “count” these backlinks.
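If you want that as numbers rather than words, here’s the back-of-an-envelope version (the weekly backlink totals are made up purely for illustration):

    <?php
    // "Link velocity" as I'm using it: new backlinks gained per period.
    // The weekly totals below are hypothetical numbers, just to show the calculation.
    $weeklyTotals = array(120, 150, 190, 260, 340);
    for ($week = 1; $week < count($weeklyTotals); $week++) {
        $velocity = $weeklyTotals[$week] - $weeklyTotals[$week - 1];
        echo "Week $week: $velocity new links discovered\n";
    }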

I’ve blogged about link velocity before and generally summarised that it is, of course, a factor in how well your website ranks. However, as with most SEO topics, the devil is in the detail and there are a lot of myths about that detail. So I would like to discuss:

1) What signals do “good” links and “notsogood” links give to your website?

2) How do domain age and your current backlink count play a part in determining your “optimal” link velocity?

3) Can you be harmed by incoming links?

These are what I believe to be some of the most important factors (definitely not all of them) contributing to link growth / velocity. As I want to have this blog post finished by Christmas, I’m going to try and stick to these three core points, although I’m sure I’ll end up going off at a tangent like I usually do. If, however, you think I’ve missed something critical, drop me a comment and I’ll see if I can do a follow-up.

The difference between trust & popularity
When talking about links, it’s important to realise that there is a world of difference between a signal of trust and a signal of popularity. They are not mutually exclusive and to rank competitively, you’ll need signals of both trust and popularity, but for now realising they are different is enough.


For instance: Michael Jackson is still (apparently) very popular, but you wouldn’t trust him to babysit your kids now, would you? The guy down the road in your new neighbourhood might be the most popular guy in your street, but you’re not going to trust him until someone you know well gives him the thumbs up.

So for your site to rank well, Google needs to be able to have a degree of trust (e.g. source of incoming links, domain age, site footprints) to ensure you’re not just another piece of two-bit webscum, and it needs to know your content is popular (i.e. good content, link velocity, types of links). As I’ve already said, I’m not going to get into a drawn out debate about content here, just looking at links.

What comes first, trust or popularity?
It doesn’t really make much logical sense that you could launch a website with no fanfare and immediately get a stream of hundreds of low quality links every week.

This kind of sits well with the original plan of the PageRank algorithm, which, let’s not forget, was originally trying to calculate the chance that a random surfer clicking around the web will bump into your site. This notion of a random surfer clicking random links gave Google an excellent abstraction for working out the whole “page authority” idea that the lion’s share of their algorithm sprang from.

Nowadays, you’ll hear lots of people trumpeting about going after quality (i.e. high PR links) rather than lots of “low quality” (low PR) links while trying to remain relevant. From the algorithm’s origins point of view, higher PR pages simply have more of these virtual random surfers landing on them, so there’s more chance of a random surfer clicking your link.

Looking back at “time zero”, when PageRank first started to propagate around the web, apart from internal PR stacking all sites were equal, so PageRank was actually accumulated through raw numbers of links rather than this “quality” (high PR) angle, which is really just a cumulative effect of the PageRank algorithm (at least in its original form).
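If you want to see the random surfer in action, here’s a toy sketch of the original calculation. The four pages, their links and the 0.85 damping factor are invented for illustration – this is the textbook algorithm, not anything Google runs today.

    <?php
    // Toy "random surfer" PageRank over a handful of made-up pages.
    // $links maps each page to the pages it links out to.
    $links = array(
        'a' => array('b', 'c'),
        'b' => array('c'),
        'c' => array('a'),
        'd' => array('c'),
    );
    $pages   = array_keys($links);
    $n       = count($pages);
    $damping = 0.85;                            // chance the surfer follows a link rather than jumping to a random page
    $pr      = array_fill_keys($pages, 1 / $n); // at "time zero" every page is equal

    for ($i = 0; $i < 50; $i++) {
        $next = array_fill_keys($pages, (1 - $damping) / $n);
        foreach ($links as $page => $outlinks) {
            $share = $pr[$page] / count($outlinks); // the surfer picks one outlink at random
            foreach ($outlinks as $target) {
                $next[$target] += $damping * $share;
            }
        }
        $pr = $next;
    }
    arsort($pr);
    print_r($pr);

Run it and page c, with three pages pointing at it, soaks up most of the score – a “quality” link is really just a link from a page that already hogs a lot of the surfer’s time.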

Hopefully you’re still with me and not too bored of going over fundamentals, but without this level of understanding you’ll have a job getting your head around the more advanced concepts of link growth. Keep in mind, I’m talking about pure PageRank in its original form (I’m sure it’s been updated since it was published), not ranking factors as a whole. To be honest, when I’m ranking websites (which I’m pretty good at), PageRank normally plays a very, very small role in my decision making; it is, however, useful as an abstract concept when planning linking strategies.

The point I’ve been alluding to here is that for Google to buy into the fact that yes, your site is getting lots of natural “run of the mill” links, you will first need links from higher PageRank pages (or authoritative pages, which are slightly different – bear with me). This line of thinking of course assumes you don’t use a product like Google Analytics (“Googlebot: Hmm, 58 visitors per month and 1,200 new incoming links per month, makes perfect sense!”).

Google is also pretty good at identifying “types” of websites and marrying this up to trust relationships. So for instance, I think most people would like a link from the homepage of the BBC News website: it’s a whopping PR9 and has bucketloads of trust. Here’s a question though: is it a “relevant” link? The BBC News website covers a massive variety of topics, as most news sites do, so what is relevant and what is not pretty much depends on the story. Does a link from the BBC News site mean your site is “popular”? No (although it might make it so). Here’s a good question to ask yourself: between these two scenarios, which is more believable?

1) Brand new site launched :: Couple of links from small blogs :: Gets 2,000 links in first month

2) Brand new site launched :: 1 link from the BBC News homepage :: Gets 2,000 links in first month

Of course, you’ve hopefully identified situation 2 as the far more likely candidate. Let’s consider what Google “knows” about the BBC website:

Googlebot says:

1) I know it’s a news website (varied topics)

2) I know millions of other sites link to it (it’s incredibly popular)

3) Lots of people reference deep pages (the content is of great quality)

4) I see new content hourly as well as all the syndicated content I’m tracking (Fresh – as a news site should be)

5) It’s been around for years and never tried to trick me (another indicator of trust)

6) If they link to somebody, they are likely to send them lots of traffic (PR)

7) If they link to somebody, I can be pretty sure I can trust whoever they link to

Despite its critics, I’m a big believer in (at least some kind of) TrustRank system. It makes perfect sense, and if you haven’t read the PDF, it’s very much worth doing so. In a hat tip to the critics, it is incredibly hard to prove: because of the dynamic nature of the web, it is almost impossible to separate the effects of PageRank, relevance, timing, content and a myriad of other glossary terms you could throw at any argument. However, without leaps of faith no progress would be made, as we’re all building on theory here.
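For the curious, the core idea in the paper boils down to something like this rough sketch – the sites, the single seed and the 0.85 damping are all invented for illustration:

    <?php
    // Rough sketch of the TrustRank idea: trust starts at a hand-reviewed seed
    // and is propagated (dampened) along outlinks. All the sites here are made up.
    $links = array(
        'seed'  => array('siteA'),
        'siteA' => array('siteB'),
        'siteB' => array('siteC'),
        'siteC' => array(),
        'siteD' => array('siteC'), // nothing trusted links to siteD
    );
    $trust = array_fill_keys(array_keys($links), 0.0);
    $trust['seed'] = 1.0; // the manually vetted seed page
    $damping = 0.85;

    for ($i = 0; $i < 20; $i++) {
        $next = array_fill_keys(array_keys($links), 0.0);
        $next['seed'] = 1 - $damping; // the seed keeps being topped up
        foreach ($links as $page => $outlinks) {
            if (count($outlinks) == 0) {
                continue;
            }
            $share = $damping * $trust[$page] / count($outlinks);
            foreach ($outlinks as $target) {
                $next[$target] += $share;
            }
        }
        $trust = $next;
    }
    arsort($trust);
    print_r($trust); // trust decays with distance from the seed; siteD, which no trusted page links to, gets none at all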

Side note: While I’m talking about experimentation and proof, I’m still chipping away at my SEO Ranking Factors project (albeit slower than I’d like) and I’ll be willing to share some scripts for “tracking TrustRank” in the new year – dead useful stuff.

Okay, the point I’m making here is that these high trust/authority (whatever you want to call them) sites are a stepping stone to greater things. I would agree with the whitehat doctrine that yes (if it’s your own domain at least), you will require links from these sources if you are to rank well in the future. We’ll look at some examples of how to rank without those links later (:

Trust needs to come before mass popularity, and there are other things you may want to consider apart from just scanning websites and looking for as much green bar as possible. There are other mechanisms which I don’t believe Google is using to the full extent they should (even when they play around with that goddamn WikiSearch – mustn’t get started on that).

Looking at it from a Wikinomics angle, these are less trustworthy signals, but being on the front page of Digg, being popular on StumbleUpon or having lots of Delicious bookmarks could all be signals of trust as well as popularity (although at the moment, at least, they are easier to game). I would expect that before Google can use these types of signals as strong search factors, there will need to be more accountability (i.e. a mass information empire) behind user accounts. This is perhaps one of the things that could make WikiSearch work: linked to your Google Account, Google can see if you use Gmail, search, Docs, video, Blogger, Analytics – the list goes on – so it’s going to be much harder to create “fake” accounts to boost your popularity.

Domain age and link profiles
Domain age definitely has its foot in the door in terms of ranking; however, having an old domain doesn’t give you a laminated backstage pass to Google rankings. The most sense you’re going to get out of looking at domain age comes from overlaying it with a link growth profile, which is essentially the time aspect of your link building operation.

Your natural link growth should have an obvious logical curve when averaged out, probably something like this:

Which roughly shows that during natural (normalised) organic growth, the number of links you gain per day/week/month will increase (your link velocity goes up). This is an effect of natural link growth, discovery and more visitors to your site. Even if you excuse my horrific graph drawing skills, the graph is pretty simplified.
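If you prefer numbers to my horrific graphs, the same idea looks something like this – the starting figure and growth rate are plucked out of the air, it’s the shape that matters:

    <?php
    // Idealised "natural" link growth: links gained per month (velocity) keeps
    // climbing as existing links bring discovery and more visitors. Numbers invented.
    $totalLinks = 10;
    for ($month = 1; $month <= 12; $month++) {
        $newLinks    = (int) round($totalLinks * 0.4) + 5; // velocity grows with what you already have, plus a trickle of fresh discovery
        $totalLinks += $newLinks;
        printf("Month %2d: +%4d new links, %5d total\n", $month, $newLinks, $totalLinks);
    }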

How does this fit into link growth then?
I’ll be bold and make a couple of statements:

1) When you have established trust, even the crappiest of crap links will help you rank (proof to come)

2) The more trustage (that’s my new term for trust over time (age)) the greater “buffer” you have for building links quickly

Which also brings us to two conclusions:

3) Straying outside of this “buffer zone” (i.e. 15,000 low quality new links in week 1) can see you penalised.

4) If you’ve got great trust you can really improve your rankings just by hammering any crap links you like at the site.

So, going along with my crap-o-matic graphs:

As I’ve crudely tried to demonstrate in graphical form, your “buffer zone” for links increases almost on a log scale, along with your natural links. Once you’ve established a nice domain authority, it’s pretty much fair game with links, within reason.

I s’pose you’re going to want some proof for all these wild claims, aren’t you?

Can incoming links harm your website?
The logical answer to this would be “no”. Why would Google have a system in place that penalises you for bad incoming links? If Google did this, they would actually make their job of ranking decent pages much harder, with SEOs focusing on damaging the competition rather than working on their own sites. It would be a nightmare, with a whole sub-economy of competitor disruption springing up.

That’s the logical answer. Unfortunately, the correct answer is yes. I’ll say it again for the scan readers:

It is possible to damage the rankings of other websites with incoming links

Quote me if you like.

Now by “bad links” I don’t mean the local blackhat viagra site linking to you; that will most likely have absolutely no effect whatsoever. Those kinds of sites, which Google classes as “bad neighbourhood”, can’t spread their filth by just linking to you, let’s be clear on that. You’re more at risk if someone tricks you into linking to a bad site with some kind of Jedi mind trick.

There are two ways I’ve seen websites’ rankings damaged by incoming links:

1) Hopefully this one is obvious. I experienced this myself after registering a new domain and putting a site up 2 days later – which ranked great for the first couple of weeks. Then, well.. I “accidentally” built 15,000 links to it in a single day. Whoops. I never saw that site in the top 100 again.

2) There is a reliable method of knocking pages out of the index, which I’ve done (only once) and seen others do many, many times. Basically, you’re not using “bad” links as such – by that I mean they’re not from dodgy/blackhat or banned sites; they are links from normal sites. Say, for instance, you find a sub-page of a website ranking for a term like “elvis t-shirts” (a random term – I don’t even know what the SERPs are for it) with 500 incoming links to that page. If you get some nice scripts and programs (I won’t open Pandora’s Box here – if you know what I’m talking about then great) and drop 50,000 links over a 2 week period with the anchor text “buy viagra”, you’ll find, quite magically, that you have totally screwed Google’s relevancy for that page.

I’ve seen pages absolutely destroyed by this technique, going from 1st page to not ranking in the top 500 – inside of a week. Pretty powerful stuff. You’ll struggle with root domains (homepages) but sub-pages can drop like flies without too much problem. Obviously, the younger the site the easier this technique is to achieve.

You said you could just rank with shoddy links?
Absolutely true. Once you’ve got domain authority, it’s pretty easy to rank with any type of link you can get your hands on, which means blackhat scripts and programs come in very useful. To see this in effect, all you have to do is keep your eye on the blackhat SERPs. “Buy Viagra” is always a good search term to see what the BHs are up to. It is pretty common to see Bebo pages, Don’t Stay In pages – or the myriad of other authoritative domains with User Generated Content – rank in the top 10 for “Buy Viagra”. If you check out the backlink profiles of these pages you will see, surprise, surprise, they are utter crap low quality links.

The domains already have trust and authority – all they need is popularity to rank.

Trust & Popularity are two totally different signals.

Which does your site need?

We have learnt:

1) You can damage sites with incoming links

2) Trust & Authority are two totally different things – Don’t just clump it all in as “PageRank”

3) You can rank pages on authority domains with pure crap spam links (:

Good night.

Posted in Google, Research & Analytics, Search Engine Optimisation | 18 Comments »

DoFollow Blogs & Forums

Saturday, August 16th, 2008

My original post for DoFollow blogs was getting a little outdated and crusty, so I’ve spruced it up.

The DoFollow Search Engine was definitely the way to go – so forget the PDF list. I’ve updated the search engine with about 100 forums which have DoFollow enabled. So you can now use it to search for relevant blogs and forums to drop links to your sites.

Also, if you know of any DoFollow blogs or forums that aren’t listed – you can contribute them – let’s build up a good database! (:


Contribute to the Dofollow Search Engine

Or, go straight to the action and try out the DoFollow Search Engine

Posted in Search Engine Optimisation, Social Marketing, Splogs | 16 Comments »

How To Make Money With An Automated Blog & AutoStumble

Wednesday, August 13th, 2008

Welcome to another “how to” post. If you follow the recipe here, you’ll be onto Stage 3 = Profit in no time. This ties quite nicely in with the Blackhat SEO Tools post and the AutoStumble post, for those who haven’t read them. It’s a little blackhat, but nothing to lose sleep over (hah, as if!) and this is really, really, reaaallyy easy stuff. Sitting comfortably? Let us begin..

What is the end goal?

The end-game of this post is to have a fully automated blog, which generates shitloads of traffic via StumbleUpon, referrals and Google Blogsearch. In the process, you’ll also gain loads of subscribers and generate some nice easy revenue. Once it’s built, the entire thing is just about hands free.

What you need before you begin…
To complete this project you will need:

1) Nice clean installation of WordPress

2) The Digerati Blackhat SEO Tool Set

3) A registered copy of AutoStumble

Let’s get started
I’m going to assume you know the basics of setting up a WordPress blog. If not, you can get more detail from Making Money With a Video Blog or, if you’re totally new, check out the official WordPress documentation. So yea, if you’re that new, please RTFM.

Once you’ve got your WordPress blog installed and running, do the basics such as setting the permalinks to be the post title, so you get those little extra keywords in the URL. You’ll also need to find yourself a theme. As discussed in Making Money With a Video Blog, the layout is really, really important for getting clicks on your ads. You could start with a template like ProSense – I’ve found the click-through ratio to be pretty low, but at least it’s quick. Ideally, have a hunt around so you meet the criteria of showing your content above the fold, centrally, and having your ads nicely surrounded and blended in. The key here is to experiment and see what works well for you.

Plugins FTW
There’s a whole crapload of plugins that will make your life a lot easier. We’ll start off with the important one, FeedWordpress, which is part of the Digerati Blackhat SEO Tool Set if you don’t already have it.

Upload the feedwordpress folder, as usual, to your wp-content/plugins directory. You’ll also need to take the 2 files from the feedwordpress “Magpie” subfolder and put them into your “wp-includes” directory, which will overwrite some default WordPress files. Don’t miss that step…

Once you’re installed and you’ve activated the plugin via your WordPress dashboard, you’ll have a new option on your main navigation.



Just like that. So give that a click and then go into the “Syndication Options” menu. From here you’ll be able to configure FeedWordpress to do your bidding.

You should get an option screen like this:


So let’s run through these options.

1) The first thing you want to change is the “Check For New Posts” option. You’ll want to set this to “automatic”. This will go sniff your RSS feeds at an interval you specify to grab new content. You can leave it on every 10 minutes for now.

2) Make sure the next 3 boxes are checked; this will keep your feed information bang up to date.

3) You should set syndicated posts to be published immediately. This will allow you to get your content live ASAP, which is always a plus.

4) Permalinks. This is basically: when somebody clicks on the post, do they go to the original website that you, er… borrowed the content from, or do they go to the scraped version on your site? For this example (and I’ll give the gonadless among you an ethical loophole later), set it to “this website”.

5) I always set FeedWordpress to create new categories. I never display categories in the menu, but it gives the post a few more keywords and a bit more relevance for search. So, if someone else has gone to the effort of writing a tag, it would just be wasteful of you not to use it!

Okay, that’s set up… What exactly are we scraping?
To be honest, I’m not a big fan of people scraping content that people have sweated over. However, one thing I don’t mind doing is thieving from thieves.

You’re on the hunt for “disposable” content – generally not text based. Think along the lines of Flash games, funny videos, funny pictures, hypnomagical-optical-illusions – that kind of thing. The Internet is awash with blogs that showcase this stuff. Check out Google blogsearch and try a search like funny pictures blog. There are hundreds of the leeching bastards showcasing other people’s pictures, videos, games and hypnomagical-optical-illusions for their website. They can hardly call it “their” content. With this ethical pebble tossed aside, we can go and grab some content.

There are loads of ways you can hunt down potential content. You’re on the lookout for RSS feeds carrying this rich media, so you could try Google Blogsearch, Technorati, MyBlogLog – basically any site that lets you search the blogosphere.
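Before you commit a feed to FeedWordpress, it’s worth a quick sanity check that its items actually carry embedded media. Something crude like this does the job (the feed URLs are placeholders – swap in the ones you’ve dug up):

    <?php
    // Crude check: how many recent items in each candidate feed contain embedded media?
    $feeds = array(
        'http://example.com/funny-pictures/feed',
        'http://example.org/flash-games/rss',
    );
    foreach ($feeds as $url) {
        $rss = @simplexml_load_file($url);
        if (!$rss) {
            echo "$url -- couldn't fetch or parse\n";
            continue;
        }
        $rich  = 0;
        $total = 0;
        foreach ($rss->channel->item as $item) {
            $total++;
            $body = (string) $item->description;
            // Look for images, Flash embeds or media enclosures in the item.
            if (stripos($body, '<img') !== false
                || stripos($body, '<embed') !== false
                || stripos($body, '<object') !== false
                || isset($item->enclosure)) {
                $rich++;
            }
        }
        echo "$url -- $rich of $total recent items contain embedded media\n";
    }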

Once you’ve got the location of about a dozen or so RSS feeds, you can go to your Syndication menu again and “add a new syndicated site”. It’s a simple matter of pasting in the RSS feed location and hitting syndicate. Once you’ve added them all, hit “update”. Boom, shake the room – you’ve probably got a couple of hundred “new posts”.

New posts, no traffic
You of course want to set up your WordPress RSS feed. Something like Feedburner is dead easy to set up and will get Google interested off the bat. Make sure you have a nice big RSS button and offer e-mail subscription (Feedburner does this) for those who don’t have a clue what the hell RSS is.

The cool thing about services like Google Blogsearch is that they’re pretty much chronologically sorted. So as long as you have a steady stream of posts, you’re guaranteed at least a trickle of traffic from long-tail searches.

Hot potato, grab and switch
If you really want to get some serious traffic, you’re going to need some “pillar” posts – content that you know for sure is strong. The easiest way to do this is to keep an eye on sites like Digg and Reddit. Check out what’s going hot on there, what’s new and what’s viral. Probably the easiest thing to do is subscribe to the Digg Offbeat / Comedy RSS. This will give you constant updates on what’s upcoming.

Due to the differences in the types of people, there doesn’t tend to be as much overlap between hubs such as Digg, Reddit & StumbleUpon as you might first think. I’ve seen things go viral on Reddit and then take two or three days to make it onto the frontpage of Digg. So, you can grab content that’s going hot on one of these hubs – your proverbial “hot potato” – and put it in front of the nose of another audience.

Here’s where AutoStumble comes in
This is probably the easiest way to use AutoStumble. Grab your hot potato content from Digg and do a manual post on your blog. Submit this page to StumbleUpon.

AutoStumble costs £20 and is a desktop application which allows you to automatically pool hundreds of StumbleUpon votes with other users. In other words, this is your quick way of getting your content to go viral on StumbleUpon. If you purchase and download AutoStumble, it is simply a matter of pasting in the URL you want to go viral on StumbleUpon and hitting “AutoStumble”.

A few hundred votes later. Voila. You have traffic.

The value of StumbleUpon traffic
1) The most I’ve had is just over 70,000 unique visitors over a 3 day spike from StumbleUpon. So firstly, you can generate a fairly decent bit of green from your initial CPM ad impressions and clicks on things like Adsense. (StumbleUpon users don’t tend to be as picky about clicking on ads as Diggers).

2) With this volume of traffic, you’ll likely find a few people who really like your content. You’ll get RSS / Email subscribers who will be a permanent addition to your monthly traffic (and revenue).

3) A lot of these social sites are populated with pretty tech savvy people. A lot of these people run their own blogs, forums, websites – or at least add content somewhere themselves on the web. If you get 10,000 visitors from StumbleUpon, you can expect a decent amount of lovely natural links from around the web. Links mean better website authority, better rankings, better traffic and better revenue. The value for me at least, is really long-term.

Making things easy for yourself
You’ll probably want to install some extra plugins such as:

  • WordPress Automatic Update – This will update your WordPress installation as well as plugins. Generally, it will save you a lot of time.
  • Clean Archives Reloaded – I use this on my archive page. It’s a nice way to lay out all of your blog posts with clean anchor text to improve relevance with some internal linking.
  • Sitemap Generator – I don’t really bother with Sitemaps, but for those who do – saves you generating one from scratch.

Don’t forget, if you’re going to be switching content onto platforms like Digg or Reddit, make sure you have their native vote button included in the post! You want to make it as easy as possible to grab all of the votes you can. Again, personally – I don’t bother with the generic social bookmarking plugins for WordPress, as I find nobody actually seems to use them.

Oh, and before anyone chirps in trying to be clever saying “(sniffle) won’t duplicate content be an issue?” No! It won’t, fucktard! Get back in your hole. Aside from the dupe content filters being primarily built on shit, you’ll be posting mostly rich media. Google’s not too great at working out the exact content of pictures and videos… Yet. Yes, it will probably change one day in the future, and we’ll all look back on this post and laugh. At the moment, it’s not something they do well, so, well…. Ching..Ching.

Taking it one step further
This whole project should take you less than 30 minutes, from sitting down at your computer to having a fully automated blog posting and promotion system set up. If you like the idea, it would be worth packaging everything I’ve mentioned here together into your own custom install file, so you can deploy new sites in under 15 minutes.

If you’re going to do this, you may as well make your cookie cutter solution as good as it can be. Hopefully, if you’re thinking down the right road you can come up with some of your own ideas to improve on these techniques (there are loads).

Why not look at only showing social voting buttons from sites you know your visitors actually use? Here’s some code to give you the idea.
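This is just a rough sketch keyed off the referrer – the permalink and submit URLs are illustrative placeholders, so check them before you use them (inside WordPress you’d grab the post URL with get_permalink()):

    <?php
    // Rough sketch: only print the vote button for the social site the visitor arrived from.
    $referer   = isset($_SERVER['HTTP_REFERER']) ? $_SERVER['HTTP_REFERER'] : '';
    $permalink = 'http://www.example.com/your-post/'; // placeholder - swap in the real post URL

    // Map of referrer domains to (illustrative) submit URLs.
    $buttons = array(
        'digg.com'        => 'http://digg.com/submit?url=',
        'reddit.com'      => 'http://www.reddit.com/submit?url=',
        'stumbleupon.com' => 'http://www.stumbleupon.com/submit?url=',
    );

    foreach ($buttons as $site => $submitUrl) {
        if ($referer && stripos($referer, $site) !== false) {
            echo '<a href="' . $submitUrl . urlencode($permalink) . '">Vote for this on ' . $site . '</a>';
            break; // show one button at most
        }
    }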

Enjoy.

Posted in Adsense, Advertising, Black Hat, Blogging, Google, Grey Hat, Search Engine Optimisation, Social Marketing, Splogs, Viral Marketing | 33 Comments »

Blackhat SEO Tools & Scripts – The Digerati Blackbox

Thursday, June 12th, 2008

buenos dias, friends!

I’ve put together a little treat for all of you budding and new blackhats out there. I got quite annoyed this week with the whitehattards on Sphinn.

Those of you who actually know me will know I believe whitehat stuff is very important to building a web business. However, I also believe there is a strong case for at least experimenting with grey/blackhat (whatever you want to call it). There are some markets you literally cannot touch without getting off your rainbow-shitting whitehat unicorn of light. Unfortunately, there are a lot of, erm, “dedicated” whitehats out there who refuse to even learn what blackhat is. I’d like to take this opportunity to dispel some myths (AKA venting) about blackhat. For those of you who don’t enjoy reading pissed-off rants (I believe the whitehat word for pissed is “snarky” – thanks Matt.C), feel free to skip down the page to the goodies.

Things that whitehattards believe to be true:

1. That “on page” SEO is some uber-skill which takes years to learn.

False. If you actually get a good web developer, the chances are he (or she!) will make a decent crawlable website. You might be able to help them out with some keyword research to help target title/header tags, or give them a little advice on PR sculpting for large sites with nofollow. Good internal linking structures are pretty well known – at least among the web developers I know. If any pure whitehat starts talking about precise keyword density, just laugh in their face.

2. The main thing about SEO is creating good content.
Good content gets links, yes. Well done. Why are you doing SEO when so many crimes are going unsolved around the world? Good content is important for a whitehat site, yes. However, good content is not bloody SEO! How do I know this? Would you bother writing good content if search engines didn’t exist? Yes, you would. Therefore it is actually a component of web design, not SEO!

3. There’s no point in blackhat, you’ll just get banned.
This little corker comes from two types of people: normally people who have never tried blackhat (glad they’re qualified to comment – why not go give a lecture on brain surgery while you’re at it), or, secondly, people who have tried some very, very basic blackhat, done it badly and left footprints like a crack-addicted yeti storming around the web. I know of many blackhat sites that have enjoyed top positions for years without getting caught, for competitive key phrases those whitehats couldn’t touch with a NASA-sized hard drive full of great content.

4. I’m a good whitehat SEO because “I know” where to get links from
Aww now, c’mon. Not really a “core” SEO skill, is it? I’ll give you that it helps. I think what you’re trying to say is “I understand how the web works and where it is possible to drop links” or “I use social news/community sites”. I know people who have never built a link in their life and would make great whitehat SEO link builders because they spend ages writing content for blogs and taking part in Digg, Reddit, Stumble, blaahh, blahhh. At best, it’s a transferable skill.

5. Blackhat SEOs only resort to blackhat because they can’t produce good websites
This one (which I saw several times on Sphinn) just leaves my jaw on the floor. Generally, blackhats are far more accomplished programmers than whitehats and can build much cleaner and more efficient websites if they wish (and a lot do). The fact is, through scripts and automation they’ve found a way to make a decent income without burning the midnight oil writing content for the new “diamond goat hoof jewellery” niche they’ve found. This comment normally comes from whitehats who wouldn’t know a blackhat if one spammed them in the face.

There is, however, advanced whitehat SEO, as Eli kindly demonstrates in his painfully bastardish, always-right way.

Ahem. Anyway…..

The Digerati Blackbox

So, I’ve collected together a set of tools, scripts, databases and tutorials which will help the beginner blackhat find their feet. Some of the stuff is pretty good, albeit fairly basic. You should be able to make something decent if you combine some of these scripts, or strip out some of the code into your own creations.

Blackbox Contents:

Cloaking & Content Generation:

cloakgen1.zip:
This is a cloak / dynamic content generation script. To use it you simply add a small piece of code to the top of each page you wish to be cloaked. When someone accesses your page, cloakgen runs: if the user-agent suggests the visitor is a standard user, they are simply shown your standard page. However, if the user-agent suggests the visitor is a search engine, it starts doing the business. It begins by finding out which page called it, then opens that page and works out its most common words. Once it has done that, it scrapes some content about each word from Wikipedia and adds it to your normal page content. Each keyword is emphasised in a random way – for example, bold or a red font. The final page is output in the following way:

Title of the page in capital letters
Large title at the top of the page
Content of the website with emphasised keywords and wiki content

padkit.zip:
PAD is the Portable Application Description, and it helps authors provide product descriptions and specifications to online sources in a standard way, using a standard data format that will allow webmasters and program librarians to automate program listings. PAD saves time for both authors and webmasters. This is what you want to use with the below databases.

yacg.zip:
You should have heard of Yet Another Content Generator (YACG). It’s a beautifully easy way to get websites up and running in minutes with mashed up scraped content.

Databases:

articles.zip:
A database of 23,770 different articles on a variety of topics.

bashquotes.zip:
This is a database of every quote on Bash.org. This huge Database has every single quote as of May 1st, 2007!

KJV_bible.zip:
The whole King James Bible – Old & New Testaments.

medical_dictionary.sql.zip:
Over 130,000 rows of medical A-Z

Keyword Scripts:

ask-single-keyword-scraper.zip:
This script allows you to scrape a range of similar keywords to your original keyword from Ask.com.

google-single-keyword-scraper.zip:
This script will take a base keyword and then scrape similar keywords from google.

msn-live-api-scraper.zip:
This script uses php cURL to scrape search results from the MSN LIVE Search API.

overture-single-keyword-scraper.zip
Enter one base keyword and scrape similar keywords from overture.

Linkbuilding Scripts:

dity.zip:
A very easy to use (and old) multi guestbook spammer.

logscraper.zip:
Nifty little internal linker (read more about it here)

trackback.zip
Very powerful trackback poster. Trackback Solution is 100% multithreaded and very efficient at automatically locating and posting trackback links on blogs.

xml-feed-link-builder-z.zip
Very nice script to generate links to your site from people scraping RSS.

Misc Scripts:

alexa-rank-cheater1.zip:
Automate the false increase of your Alexa rating/rank.

typo-generator-esruns.zip:
Create typos of a competitive keyword and rank easy!

Scraping:

feedwordpress.0.993.zip:
Wordpress plugin that makes scraping the easiest thing in the world.

Proxies:

proxy_url_maker.zip:
Create a list of web proxy URLs used for negative seo purposes or spam

proxygrabber.zip:
A script to download proxies from the samair proxy list site.

CAPTCHAs:

delicious.zip:
Delicious CAPTCHA broken. In Python.

smfcaptchacrack.zip:
Simple Machines Forum CAPTCHA breaker, compiled and designed to run on Linux but portable to Windows.

Tutorials:

curl_multi_example.zip:
What it says on the tin. Examples of m-m-m-multi curl! (See the sketch just after this list for the gist.)

superbasiccurl.zip:
4 super basic tutorials on using curl/regex.
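Speaking of multi-cURL: the whole trick is just firing off several requests at once instead of waiting on them one by one. A bare-bones sketch (the URLs are placeholders):

    <?php
    // Bare-bones multi-cURL: fetch several pages in parallel.
    $urls = array('http://example.com/', 'http://example.org/', 'http://example.net/');

    $mh      = curl_multi_init();
    $handles = array();
    foreach ($urls as $url) {
        $ch = curl_init($url);
        curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // hand the body back instead of echoing it
        curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
        curl_multi_add_handle($mh, $ch);
        $handles[$url] = $ch;
    }

    // Pump the handles until every request has finished.
    do {
        curl_multi_exec($mh, $running);
        curl_multi_select($mh); // wait for activity rather than spinning the CPU
    } while ($running > 0);

    foreach ($handles as $url => $ch) {
        echo $url . ' -- ' . strlen(curl_multi_getcontent($ch)) . " bytes fetched\n";
        curl_multi_remove_handle($mh, $ch);
        curl_close($ch);
    }
    curl_multi_close($mh);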

I’d like to give special thanks to all the donors and people whose stuff is included here:

Steve – For the majority of scripting here.
Rob – For the databases
Eli – For delicious CAPTCHA breaker
Rob – For trackback magic
Harry – For proxygrabber/linux captcha scripts

Here it is:

Download Digerati Blackbox Toolkit (51.4Mb)



Disclaimer: I’m not offering support on any of these tools or scripts, although I might do a couple of tutorial posts on how to use them. So don’t ask me how to use them, check out the respective author’s website if you get stuck. Obviously Digerati Marketing Ltd, I, my dog, or anyone else cannot be held responsible for any type of loss or damages of any kind (even an act of God Google) if you choose to use them. At your own risk blah blah blah. Zzzzzz. Enjoy.

Posted in Black Hat, Grey Hat, Marketing Insights, Research & Analytics, Search Engine Optimisation, Social Marketing, Splogs, White Hat | 64 Comments »