
Archive for the 'White Hat' Category

How to make a Twitter bot with no coding

Wednesday, September 23rd, 2009

As usual, lazy-man post overview:

With this post you can learn to make a Twitter bot that will automatically retweet users talking about keywords that you specify. You can achieve this with (just about) no coding whatsoever.

Why would you want to do this? Lots of reasons I guess, ranging from spammy to fairly genuine. Normally giving somebody a ReTweet is enough to make them follow you, and it keeps your profile active, so you can semi-automate accounts and use them as an aid for making connections. That or you can spam the sh*t out of Twitter, whatever takes your fancy really.

Here we go.

Step 1: Make your Twitter Bot account
Head over to Twitter.com and create a new account for your bot. You shouldn't really need much help at this stage. Try to pick a nice name and a cute avatar. Or something.

Step 2: Find conversations you want to Retweet
Okay, we've got our Twitter account and now we need to scan Twitter for conversations to possibly retweet. To do this, we're going to use Twitter Search. In this example, we're going to search for "SEO Tips", but to stop our bot ReTweeting itself you want to add a negative keyword of your bot name. So search for SEO Tips -botname, like this:

[Screenshot: Twitter Bot]

So my bot is called “DigeratiTestBot”. Hit search now, muffin.



Step 3: Getting the feed
The next thing you need to do is get the feed of the results, which isn't quite as simple as you'd think, you see. Twitter, being a bit of a prude, doesn't like bots and services like Feedburner or Pipes interacting with it, so you're going to need to repurpose the feed or it's game over for you.

After you've done your search you need to get the feed location (top right), so copy the URL of the "Feed for this query" link.

[Screenshot: Twitter Bot]

Store that in a safe place, we’ll need it in a second.



Step 4: Making the feed accessible
Okay, so there’s a teeny-tiny bit of code, but this is all, I promise! You’re going to need to republish the feed so it can be accessed later on, but don’t worry – it’s a piece of cake. All we’re going to do is screen scrape the whole feed results page onto our own server.

Make a file called “myfeed.php” and put this in it:
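(A minimal sketch of the idea follows; if your host has file_get_contents disabled for remote URLs, the cURL equivalent works just as well.)

<?php
// myfeed.php - screen scrape the Twitter search feed and republish it
// from our own server, so Pipes can read it without Twitter complaining.

// The only line you need to change: paste in your feed URL from Step 3
$url = "http://search.twitter.com/search.atom?q=seo+tips+-yourbotname";

// Serve it up as an Atom feed
header("Content-Type: application/atom+xml; charset=utf-8");

// Fetch the feed and spit it straight back out
echo file_get_contents($url);
?>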


The only bit you need to change is:

$url = "http://search.twitter.com/search.atom?q=seo+tips+-yourbotname";

which needs to be replaced with the Twitter feed URL that we carefully saved and stored in a safe place earlier. If you've already lost that URL, please proceed back to Step 3 and consider yourself a fail.

So, having completed this and uploaded your myfeed.php to your domain, you can now get the real-time Twitter results feed at http://www.yourdomain.com/myfeed.php.

Step 5: Yahoo Pipes!
Now comes the fun bit: we're going to set up most of the mechanism for our bot in Yahoo Pipes. You'll need a Yahoo account, so if you don't have one, get one, log in and click "Create a Pipe" at the top of the screen.

This will give you a blank canvas, so let’s MacGyver us up a god damn Twitter Bot!

Add “Fetch Feed” block from “Sources”
Then in the “URL” field, enter the URL of the feed we repurposed, http://www.yourdomain.com/myfeed.php.

[Screenshot: Twitter Bot]

Add “Filter” block from “Operators”
Leave the settings as “Block” and “all” then add the following rules:
item.title CONTAINS RT.*RT
item.title CONTAINS @
item.twitter:lang DOES NOT CONTAIN EN


(You click the little green + to add more rules.) Once you've done that, drag a line between the bottom of the "Fetch Feed" box and the top of the "Filter" box to connect them. Hey presto.

[Screenshot: Twitter Bot]

Add “Loop” block from “Operators”

Add a "String Builder" block from "String" and drag it ONTO the "Loop" block you just added.


In the String Builder block you just put inside the Loop block, add these 3 items:
item.author.uri
item.y:published.year
item.content.content

Check the "assign results to" radio button and change it to item.title.

Great, now drag a connection between your Filter and Loop blocks. Should look like this now:

[Screenshot: Twitter Bot]

Add “Regex” block from “Operators”
Add these two rules:
item.title REPLACE http://twitter.com/ WITH RT @
item.title REPLACE 2009 WITH (space character)

Extra points for anyone who writes "(space character)" instead of using a space. Also, don't miss the trailing slash on twitter.com/.
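In case you're wondering what those two replaces actually achieve: the String Builder glued the author URI, year and content together with no separators, so the first replace turns the URI into "RT @username" and the second turns the year into the missing space. Here's the whole trick in plain PHP (just an illustrative sketch with made-up values, not a step in the tutorial):

<?php
// Illustrative sketch of what the pipe does to each item (made-up values).
$author_uri = "http://twitter.com/SomeUser";         // item.author.uri
$year       = "2009";                                 // item.y:published.year
$content    = "great seo tips over at example.com";   // item.content.content

// String Builder step: concatenate the three fields into item.title
$title = $author_uri . $year . $content;

// Regex step: rewrite the string into a ready-made retweet
$title = str_replace("http://twitter.com/", "RT @", $title);
$title = str_replace("2009", " ", $title);

echo $title; // RT @SomeUser great seo tips over at example.com
?>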



Drag a connection between Loop Block and Regex Block, then a connection between Regex and Pipe Output blocks.

Finished! Should look something like this:

[Screenshot: Twitter Bot]

All you need to do now is Save your pipe (name it whatever you like) and Run Pipe (at the top of the screen).

Once you run your pipe, you’ll get an output screen something like this:

[Screenshot: Twitter Bot]

What you need to do here is save the URL of your pipe’s RSS feed and keep it in a safe place. If you didn’t lose your RSS feed from Step 3, then I’d suggest keeping it in the same place as that.



Step 6: TwitterFeed
Almost there, comrades. All we need to do now is whack our feed into our TwitterBot account, which is made really easy with TwitterFeed.com. Get yourself over there and sign up for an account.

To set up your bot in TwitterFeed:

1) I suggest not using OAuth, as it will make it easier to use multiple Twitter accounts. Click the "Having Oauth Problems?" link, enter the username and password for your TwitterBot account and hit "test account details".

2) Name your feed whatever you like and then enter the URL of your Yahoo Pipes RSS that we carefully saved earlier, then hit “test feed”.

3) Important: Click "Advanced Settings"; we need to change some stuff here:

Post Frequency: Every 30mins
Updates at a time: 5
Post Content: Title Only
Post Link: No (uncheck)

Then hit “Create Feed”

[Screenshot: Twitter Bot]

All done!

Have fun and please, don't buy anything from those losers who are peddling $20 "automate this" Twitter scripts. If you really need to do it, just make it yourself, or if you don't know how, leave a comment here and I'll show you how.

Bosh.

Posted in Advertising, Black Hat, Blogging, Grey Hat, Scripting, Social Marketing, White Hat | 115 Comments »

Blogs Worth Reading

Monday, December 15th, 2008

I've never done a round-up of the blogs I read before, which I guess is a bit selfish. So, in no particular order (and this isn't a complete list), here are some of my favourite blogs, if you're looking for some inspiration.

Dark SEO Programming is run by Harry. As he puts it, “SEO Tools. I make ‘em”. A great guy if you need help with coding and somewhat of a captcha guru, with a sense of humour. Definitely worth keeping up with. I wouldn’t be surprised if this guy starts making big Google waves in the next few years.

Ask Apache is a blog I absolutely love. Great, detailed tutorials on script optimisation, advanced SEO and mod_rewrite. AskApache’s blog posts are the kind of ones that live in your bookmarks, rather than your RSS Reader.

Andrew Girdwood is a great chap from BigMouthMedia I met last year (although I very much doubt he remembers that). Andrew seems to be a vigilante web bug hunter. What I like about his blog is that he's usually the first to find the weird things going down with Google. This usually gets my brain rolling in the right direction of my next nefarious plan. ^_^

Blackhat SEO Blog run by busin3ss is always worth checking out. He was even kind enough to give me a pre-release copy of YACG mass installer to review (it’s coming soon – I’m still playing!). Apart from his excellent tools, his blog features the darker side of link building, which of course, interests me greatly.

Kooshy is a blog run by a guy I know, who… well, I think he wants to remain anonymous (at least a little). He's just got started again after closing down his last blog and moving Internet personas (doesn't the mystery just rivet you?). Anyway, get in early; I think we can expect some good stuff from here. He's already done a cool post on Pimpin' Duplicate Content For Links.

Jon Waraas is run by… can you guess? Jon has something that a lot of even really smart Internet entrepreneurs are missing: good old-fashioned elbow grease. This guy is a workaholic and it pays off in a big way. Apart from time-saving posts on loads of different ways to monetise your site, build backlinks and flush out your competitors, I get quite a lot of inspiration from his constant stream of effort and ideas. I could definitely take a leaf out of his work ethic book.

Blue Hat SEO is becoming one of the usual suspects really. If you're here, you probably already know about Eli. Being part of my "let's only do a post every few months" club, I love Eli's blog because there is absolutely no fluff. He gets straight down to the business of overthrowing Wikipedia, exploiting social media and answering specific SEO questions. You'll struggle to find higher quality out there.

SEO Book is probably the most "famous" blog I'm going to mention here. Aaron started off at a disadvantage because, to be honest, I thought he was a massive waste of space for quite a while. (I guess that's what happens when you spend your SEO youth on Sitepoint listening to the people with xx,xxx posts on there.) I bought his SEO Book and for me, at least, it was way too fluffy. I'm pleased he's started an SEO training service now as it represents much better value. I'm sure he was making a lot of money from his SEO Book, but perhaps milked it too long (like I probably would have). Anyway, I kept with his blog and I've been impressed with his attitude and posts. He's done some really cool stuff, like the SEO Mindmap and, more recently, a keyword strategy flowchart which would be useful for those looking for a more structured search approach. He's also written about algorithm weightings for different types of keywords and of course has some useful SEO Tools.

Slightly Shady SEO – Great name, great blog. Although XMCP will probably take it as an insult, I've always regarded Slightly Shady as the blog most similar to mine on this list. Maybe it's because I wish I'd written some of the posts he has, before he did, hehe. Again, a no-BS approach to effective SEO: whether he's writing about Google's User Data Empire, hiding from it, or site automation, it's all gravy.

The Google Cache is a great blog for analytical approaches to SEO. There are some awesome posts on Advanced Whitehat SEO and using proxies with search position trackers. I like.

SEOcracy is run by a lovely database overlord called Rob. Rob’s a cool guy, he was kind enough to donate some databases to include in the Digerati Blackbox a while back. Most of his databases are stashed away in his content club now, which is well worth a look in. He’s also done some enlightening posts on keyword research, stuffing website inputs and Google Hacking.

This is all I’ve got time for now, apologies if I’ve missed you. There may be a Part II in the near future.

Posted in Affiliate Marketing, Approved Services, Black Hat, Blogging, Digerati News, Google, Grey Hat, Marketing Insights, Research & Analytics, Search Engine Optimisation, Social Marketing, Splogs, Viral Marketing, White Hat, Yahoo | 7 Comments »

1,147 DoFollow Blogs & Forums

Tuesday, August 19th, 2008

I thought such a big update was worth a post. Pretty happy with the DoFollow search engine now – well over 1,000 blogs & forums in the index. So, get link building…

Or, if you're smart, write an app to interface with the DoFollow search engine and do it all for you (:

Oh….My…..



Posted in Blogging, Community Sites, Digerati News, Google, Grey Hat, White Hat | 12 Comments »

Blackhat SEO Tools & Scripts – The Digerati Blackbox

Thursday, June 12th, 2008

Buenos dias, friends!

I’ve put together a little treat for all of you budding and new blackhats out there. I got quite annoyed this week with the whitehattards on Sphinn.

Those of you who actually know me will know I believe whitehat stuff is very important to building a web business. However, I also believe there is a strong case for at least experimenting with gray/blackhat (whatever you want to call it). There are some markets you literally cannot touch without getting off your rainbow-shitting whitehat unicorn of light. Unfortunately, there's a lot of, erm, "dedicated" whitehats out there that refuse to even learn what blackhat is. I'd like to take this opportunity to dispel some myths (AKA venting) about blackhat. For those of you who don't enjoy reading pissed-off ranting (I believe the whitehat word for pissed is "snarky" – thanks Matt.C), feel free to skip down the page to the goodies.

Things that whitehattards believe to be true:

1. That “on page” SEO is some uber-skill which takes years to learn.

False. If you actually get a good web developer, the chances are he (or she!) will make a decent crawlable website. You might be able to help them out with some keyword research to help target title/header tags, or give them a little advice on PR sculpting for large sites with nofollow. Good internal linking structures are pretty well known – at least with the web developers I know. If any pure whitehat starts talking about precise keyword density, just laugh in their face.

2. The main thing about SEO is creating good content.
Good content gets links, yes. Well done. Why are you doing SEO when so many crimes are going unsolved around the world? Good content is important for a whitehat site, yes. However, good content is not bloody SEO! How do I know this? Would you bother writing good content if search engines didn't exist? Yes, you would. Therefore it is actually a component of web design, not SEO!

3. There’s no point in blackhat, you’ll just get banned.
This little corker comes from two types of people: normally people who have never tried blackhat (glad they're qualified to comment; why not go give a lecture on brain surgery while you're at it), or secondly, people who have tried some very, very basic blackhat, done it badly, and left footprints like a crack-addicted yeti storming around the web. I know of many blackhat sites that have enjoyed top positions for years without getting caught, for competitive key phrases those whitehats couldn't touch with a NASA-sized hard drive full of great content.

4. I’m a good whitehat SEO because “I know” where to get links from
Aww now, c'mon. Not really a "core" SEO skill, is it? I'll give you that it helps. I think what you're trying to say is "I understand how the web works and where it is possible to drop links" or "I use social news/community sites". I know people who have never built a link in their life and would make great whitehat SEO link builders, because they spend ages writing content for blogs and taking part in Digg, Reddit, Stumble, blaahh, blahhh. At best, it's a transferable skill.

5. Blackhat SEOs only resort to blackhat because they can’t produce good websites
This one (which I saw several times on Sphinn) just leaves my jaw dropped. Generally, blackhats are far more accomplished programmers than whitehats and can build much cleaner and more efficient websites (and a lot do) if they wish. The fact is, through scripts and automation they've found a way to make a decent income without burning the midnight oil writing content about their new "diamond goat hoof jewellery" niche. This comment normally comes from whitehats who wouldn't know a blackhat if they spammed them in the face.

There is, however, advanced white hat SEO, as Eli kindly demonstrates in his painfully bastardish, always-right way.

Ahem. Anyway…..

The Digerati Blackbox

So, I’ve collected together a set of tools, scripts, databases and tutorials which will help the beginner blackhat find their feet. Some of the stuff is pretty good, albeit fairly basic. You should be able to make something decent if you combine some of these scripts, or strip out some of the code into your own creations.

Blackbox Contents:

Cloaking & Content Generation:

cloakgen1.zip:
This is a cloak / dynamic content generation script. To use it, you simply add a small piece of code to the top of each page you wish to be cloaked. When someone accesses your page, cloakgen runs: if the user-agent suggests the visitor is a standard user, they are simply shown your standard page. However, if the user-agent suggests the visitor is a search engine, it will start doing the business. It starts by finding out what page called it, then it opens this page and works out what the most common words on the page are. Once it has worked this out, it scrapes some content about that word from Wikipedia and adds it to your normal page content. Each keyword is emphasised in a random way; for example, the keyword could be bold or red font, etc. The final page will be output in the following way:

Title of the page in capital letters
Large title at the top of the page
Content of the website with emphasis and wiki content
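For the curious, the core mechanism boils down to something like this (a bare-bones sketch of user-agent cloaking, not cloakgen's actual code; the include filenames are made up):

<?php
// Bare-bones user-agent cloaking sketch (not cloakgen's actual code).
// Serious cloakers verify spider IP ranges too, since user-agents are
// trivially spoofed.

$ua = isset($_SERVER['HTTP_USER_AGENT']) ? strtolower($_SERVER['HTTP_USER_AGENT']) : '';
$spiders = array('googlebot', 'slurp', 'msnbot');

$is_spider = false;
foreach ($spiders as $bot) {
    if (strpos($ua, $bot) !== false) {
        $is_spider = true;
        break;
    }
}

if ($is_spider) {
    include 'page_for_spiders.php'; // keyword-emphasised, wiki-padded version
} else {
    include 'page_for_humans.php';  // your normal page, untouched
}
?>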

padkit.zip:
PAD is the Portable Application Description; it helps authors provide product descriptions and specifications to online sources in a standard way, using a standard data format that allows webmasters and program librarians to automate program listings. PAD saves time for both authors and webmasters. This is what you want to use with the databases below.

yacg.zip:
You should have heard of Yet Another Content Generator (YACG). It's a beautifully easy way to get websites up and running in minutes with mashed-up scraped content.

Databases:

articles.zip:
A database of 23,770 different articles on a variety of topics.

bashquotes.zip:
This is a database of every quote on Bash.org. This huge Database has every single quote as of May 1st, 2007!

KJV_bible.zip:
The whole King James Bible – Old & New Testaments.

medical_dictionary.sql.zip:
Over 130,000 rows of medical A-Z

Keyword Scripts:

ask-single-keyword-scraper.zip:
This script allows you to scrape a range of similar keywords to your original keyword from Ask.com.

google-single-keyword-scraper.zip:
This script will take a base keyword and then scrape similar keywords from Google.

msn-live-api-scraper.zip:
This script uses php cURL to scrape search results from the MSN LIVE Search API.

overture-single-keyword-scraper.zip
Enter one base keyword and scrape similar keywords from overture.

Linkbuilding Scripts:

dity.zip:
A very easy to use (and old) multi guestbook spammer.

logscraper.zip:
Nifty little internal linker (read more about it here)

trackback.zip
Very powerful trackback poster. Trackback Solution is 100% multithreaded and very efficient at automatically locating and posting trackback links on blogs.

xml-feed-link-builder-z.zip
Very nice script to generate links to your site from people scraping RSS.

Misc Scripts:

alexa-rank-cheater1.zip:
Automate the false increase of your Alexa rating/rank.

typo-generator-esruns.zip:
Create typos of a competitive keyword and rank easy!

Scraping:

feedwordpress.0.993.zip:
WordPress plugin that makes scraping the easiest thing in the world.

Proxies:

proxy_url_maker.zip:
Create a list of web proxy URLs, used for negative SEO purposes or spam.

proxygrabber.zip:
A script to download proxies from the samair proxy list site.

CAPTCHAs:

delicious.zip:
Delicious CAPTCHA broken. In Python.

smfcaptchacrack.zip:
Simple Machines Forum CAPTCHA breaker, compiled and designed to run on Linux but portable to Windows.

Tutorials:

curl_multi_example.zip:
What it says on the tin. Examples of m-m-m-multi curl!

superbasiccurl.zip:
4 super basic tutorials on using curl/regex.

I’d like to give special thanks to all donators and people who included their stuff here:

Steve – For the majority of scripting here.
Rob – For the databases
Eli – For delicious CAPTCHA breaker
Rob – For trackback magic
Harry – For proxygrabber/linux captcha scripts

Here it is:

Download Digerati Blackbox Toolkit (51.4Mb)



Disclaimer: I’m not offering support on any of these tools or scripts, although I might do a couple of tutorial posts on how to use them. So don’t ask me how to use them, check out the respective author’s website if you get stuck. Obviously Digerati Marketing Ltd, I, my dog, or anyone else cannot be held responsible for any type of loss or damages of any kind (even an act of God Google) if you choose to use them. At your own risk blah blah blah. Zzzzzz. Enjoy.

Posted in Black Hat, Grey Hat, Marketing Insights, Research & Analytics, Search Engine Optimisation, Social Marketing, Splogs, White Hat | 64 Comments »

SEO Ranking Factors

Saturday, May 31st, 2008

Right, let's kick this thing in the nuts. Wouldn't it be great if you could have a decent list of SEO Ranking Factors and, more specifically, one that tells you exactly what you need to rank for a key phrase?

Well, SEOMoz went and done this.

You've probably all seen it before: the famous SEOMoz Search Ranking Factors, the highly regarded opinions of 37 leaders of search spread over a bunch of questions. It sounds slick, it looks cool and it's a great introduction to SEO. There is, however, a rather major problem. None of them pissing agree! 37 leaders in search, closed-ended questions, yet almost ALL of the answers have only "average agreement". Just look at the pie charts at the end; there is massive dispute over the correct answer.

I find this interesting. It leaves two possibilities:

1) SEOMoz’s questions are flawed and there is no “correct” answer – this kind of kills the whole point of the project.

2) If there is a “correct” answer, then it would seem that 25%-50% of “leading people in search” don’t know WTF they are talking about.

Now before I continue, I’m not going to claim I have all the answers, far, far from it. I do some stuff and that stuff works well for me. The other thing I would like to point out is that I actually really like the SEOMoz blog and I think they provide extremely high quality content in high frequency, which is bloody hard to do. So please no flaming when I seem to be bashing their hard work, I’m simply pointing out a few things rather crudely. Oh, they’re nice people too, Jane is very polite when I stalk her on Facebook IM.

Anyway, back to slating. I think it is very hard to give quality answers to questions such as: how does page update frequency affect ranking? From my experience, I've found Google quite adaptive in knowing, based on my search query, whether it should serve me a "fresh" page or one that's collecting dust. Eli from BlueHatSEO has also made some convincing arguments that the "optimum" update frequency of a page depends on your sector/niche.

Also, these things change. Regularly. Those clever beardies at Google are playing with those knobs and dials all the time. Bastards.

Okay, I now hate you for slating SEOMoz, do you have anything useful to say?
Maybe? Maybe not. As I mentioned in my last post, I’m going to talk about some projects I’m working on at the moment and one of these is specifically aimed at getting some SEO Ranking Factors answers.

I could of course just give what I believe to be the “correct” answers to the SEO Ranking Factors questions, but like everyone else, I’d be limited to my own SEO experience. We need more data, more testing, more evidence.

There's loads of little tools floating around the net that will tell you little things like if you have duplicate meta descriptions, your "keyword density" (hah), how many links you have, all that stuff. Then you'll get some really helpful advice like "ShitBOT has detected your keyword only 3.22% on this page, you should mention your keyword 4.292255% for optimum Googleness". Yes, well. Time to fuck off, ShitBOT. These tools are kind of fragmented over the net, so it would take ages to run all 101 of them to build up a complete "profile" of your website, which really… wouldn't tell you all that much. It wouldn't tell you much because you're only looking at your own website, your own ripples in the pond. You need to zoom out a bit: get in a ship and sail back a bit, then maybe put your ship in a shuttle and blast off until you can see the entire ocean.

Well, crap. It all looks different from here..

Creating a Technological Terror
I can’t do this project alone. Fortunately, one of the smartest SEO people I know moved all the way across the country to my fine city and is going to help.

Here we go….

1) Enter the keyword you would like to rank for.

2) We will grab the top 50 sites in Google for this search term.

2) i) First of all, we will do a basic profile of these sites – very similar to, but with a bit more depth than, the data SEOQuake will give you. So things like domain age, number of sites linking to the domain, how these links are spread within the site, page titles, amount of content, update frequency, PageRank, etc. We'll also dig a bit deeper and take titles and content from pages that rank for these key phrases and store them for later.

2) ii) The real work begins here. For each one of these sites that rank, we are going to look at the second tier, which I don't see many people doing. We are going to analyse all of the types of sites that link to these sites that rank well. This will involve doing the basics, such as looking at their vital stats: their PR, links, age of domain, TLD and indexed pages.

Then we're going to take this a step further. We are going to be scanning for footprints to work out the type of link. This means: is it an image link? Is it a link from a known social news site like Digg or Reddit? Is it a link from a social bookmarking site like StumbleUpon or Delicious? Is it a link from a blog? Is it a link from a forum? A known news site? Is it a link from a generic content page? If so, let's use some language processing and try to determine if it's a link from a related content page or a random ringtones page. Cache all of this data.
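In code terms, that first footprint pass can start as simply as something like this (a hypothetical sketch; the real thing needs far more patterns, plus the language-processing layer on top):

<?php
// Hypothetical first-pass link-type classifier (sketch only).
// $page_url and $page_html belong to the page linking to the ranking site.

function classify_linking_page($page_url, $page_html) {
    $host = parse_url($page_url, PHP_URL_HOST);

    // Known social news / bookmarking sites by domain
    if (preg_match('/(digg\.com|reddit\.com)/i', $host))          return 'social news';
    if (preg_match('/(stumbleupon\.com|del\.icio\.us)/i', $host)) return 'social bookmark';

    // Footprints in the markup itself
    if (preg_match('/wp-content|wordpress/i', $page_html))        return 'blog';
    if (preg_match('/viewtopic\.php|showthread\.php|phpBB/i', $page_html)) return 'forum';

    // Anything else gets handed to the language-processing stage
    return 'generic content page';
}

echo classify_linking_page('http://digg.com/some-story', '<html>...</html>'); // social news
?>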

3) We have a huge amount of data now; we need to process it. Say we're looking at rankings for the keyterm "casino": let's put the data onto a graph showing each site's actual ranking for this keyterm vs its on-page vital stats. Let's see the ranking vs the types of links they have. Let's see how the sites rank vs the amount of links, the age of links, etc. etc…


4) We can take this processing to any level needed. Let's pool together all the data we have on the 50 sites and take averages. What do they have in common for this search term? Are these common ranking factors shared between totally different niches and keywords?

This is the type of information that I think I know. I think it would be valuable to know the information I know (=

So I guess you can expect a lot of playing with the Google Charts API, scatter graphs showing link velocity against domain age and total links and all that shit.
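To give a (totally hypothetical) flavour of it, a rank vs domain age scatter is about ten lines with the Charts API:

<?php
// Hypothetical example: ranking position vs domain age as a Google Charts
// image API scatter graph (made-up data, scaled into the API's 0-100
// text-encoding range).

$positions   = array(1, 5, 12, 20, 33, 41, 50); // ranking positions
$domain_ages = array(9, 8, 6, 5, 3, 2, 1);      // domain age in years

$x = array();
$y = array();
foreach ($positions as $i => $pos) {
    $x[] = $pos * 2;              // 1-50 -> 2-100
    $y[] = $domain_ages[$i] * 10; // 0-10 years -> 0-100
}

$chart = 'http://chart.apis.google.com/chart'
       . '?cht=s'       // scatter plot
       . '&chs=400x300' // chart size
       . '&chxt=x,y'    // show both axes
       . '&chd=t:' . implode(',', $x) . '|' . implode(',', $y);

echo '<img src="' . htmlspecialchars($chart) . '" alt="rank vs domain age" />';
?>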


You get the idea.

There's actually all kinds of secondary analysis that can be pumped into this data. For instance, even though it's a kind of made-up term, I think "TrustRank" has some sauce behind it. (There's a good PDF on TrustRank here.) Let's think of it in very, very simple, non-mathematical terms for a moment.

One fairly basic rule of thumb for the web can be that a trusted ("good") site will generally not link to a "bad" (spam, malware, crap) site. It makes sense: generally, very high quality websites vet the other sites that they link to. So it makes sense that Google would select a number of "seed" sites and give them a special bit of "trust" juice, which says that whatever site this one links to is very likely to be of good quality. This trend continues down the chain, but obviously the further down this chain you get, the more likely it is that this rule will be broken and someone (maybe even accidentally) will link to what Google considers a "bad" website. For this reason, the (and I use this terminology loosely) "Trust" that is passed on is dampened at each tier: think of each hop keeping, say, 85% of the trust, so a site three hops from a seed carries only 0.85^3, roughly 61%, of it. This allows a margin for calculated error, so if the chain is broken somewhere, the algorithm maintains its quality, because it allows for this.

I think most people could name some big, trusted websites. Why not take time to research these sites, really trusted authority sites – ones that it's at least a fair bet have some of this magical Trust? Say we have a list of ten of these sites: why not crawl them and get a list of every URL that they link to? Why not then crawl all of these URLs and get a list of all the sites THEY link to? Why not grab the first 3 or 4 "tiers" of sites? Great, now you've probably got a few million URLs. Why not let Google help us? Let's query these URLs against the keywords we're targeting. What you're left with is a list of pages from (hopefully) trusted domains that are related to your niche. The holy grail of whitehat link building. Now pester them like a bastard for links! Offer content, blowjobs, whatever it takes!
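The crawl itself is only a handful of lines, something like this sketch (depth-limited and breadth-first, with a very naive href regex; a real version needs politeness delays, relative-URL resolution and a proper queue):

<?php
// Sketch of the tiered "trust" crawl: start at hand-picked seed sites and
// follow outbound links up to 3 tiers deep, remembering each URL's tier
// (the further from a seed, the more dampened the assumed trust).

$seeds    = array('http://www.example-trusted-seed.com/'); // your researched seeds
$max_tier = 3;

$queue = array();
$seen  = array(); // url => tier at which we first saw it
foreach ($seeds as $seed) {
    $queue[]     = array($seed, 0);
    $seen[$seed] = 0;
}

while (!empty($queue)) {
    list($url, $tier) = array_shift($queue);
    if ($tier >= $max_tier) continue;

    $html = @file_get_contents($url);
    if ($html === false) continue;

    // Very naive link extraction: absolute http(s) hrefs only
    preg_match_all('/href=["\'](https?:\/\/[^"\'#]+)["\']/i', $html, $matches);

    foreach ($matches[1] as $link) {
        if (!isset($seen[$link])) {
            $seen[$link] = $tier + 1;
            $queue[]     = array($link, $tier + 1);
        }
    }
}

// $seen now maps URLs to tiers; query them against your target keywords
// (via Google) to boil the list down to trusted, niche-relevant pages.
?>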

Wouldn't it be interesting if we took this list of possible Trusted sites and tied this theory in with how many of our tendrils of trusted networks link to our high-ranking pages? There's a lot of possibilities here.

This project will be taking up a significant chunk of my time over the next few months. Maybe the data will be shit and we won't find any patterns and it will be a giant waste of time. At least then I can say with confidence that SEO actually is just the charm-grasping, pointy-hat-wearing, pole-chanting black art that so many businesses seem to think it is. At least I'll be one step closer to finding out.

Apologies once again to SEOMoz if you took offense. I love you x

Posted in Blogging, Google, Marketing Insights, Research & Analytics, Search Engine Optimisation, Social Marketing, White Hat | 10 Comments »