Hi Kaan,

Sorry, but the scraper files aren't maintained any longer; they're just too hard to keep up with.
However, I may update them once more, but I'd need more paying members to justify the expense. So tell all your friends and neighbors to join.
You can have a freelance Perl programmer update or create some scraper files for you; just give them the files.
If you do this, set FatBomb's cache lifetime to something long, a year or more, or else you'll lose your results each time a new search is made.
Until then, I really recommend learning SpiderBomb and using it to build your own databases.
Learn to use "starting points" well in SpiderBomb and you can get the same sites without scraping the SERPs.
Since Fatty uses the sites' full meta-tag descriptions, you'll get different results and descriptions than if you merely scraped the SERPs.
After SpiderBomb builds some databases, download the files and run them through the Linez Tuel to cut lines containing domain names you want to exclude.
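The Linez Tuel handles this step for you, but the underlying idea is simple: drop any line that mentions a domain on your blocklist. A minimal sketch (the domain names and line format here are made up for illustration):

```python
# Hypothetical blocklist; the real one would be domains you've researched.
excluded = {"spam-site.com", "junk.example.net"}

def keep(line):
    """Keep a database line only if no excluded domain appears in it."""
    return not any(domain in line for domain in excluded)

# Hypothetical "URL|description" lines, as a spider database might hold them.
lines = [
    "http://good-site.org|A useful page",
    "http://spam-site.com|Junk you don't want",
    "http://another.org|More good stuff",
]

cleaned = [line for line in lines if keep(line)]
print(cleaned)  # the spam-site.com line is gone
```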
I also HIGHLY recommend creating other databases by hand, researching the highest-quality sites and adding them. This creates a "hub site" for you, which Google loves.
You can also learn BlogBomb and get it working with FatBomb, which means you can add all sorts of RSS feeds and resources.
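If you're curious what pulling items out of an RSS feed looks like, here's a minimal sketch. The feed content is inlined so the example is self-contained; BlogBomb would be fetching real feeds instead:

```python
import xml.etree.ElementTree as ET

# A tiny inline RSS 2.0 feed so the sketch runs without a network call.
RSS = """<rss version="2.0"><channel>
  <title>Example Feed</title>
  <item><title>First post</title><link>http://example.com/1</link></item>
  <item><title>Second post</title><link>http://example.com/2</link></item>
</channel></rss>"""

root = ET.fromstring(RSS)
# Each <item> carries a title and a link - the raw material for a resource list.
items = [(i.findtext("title"), i.findtext("link")) for i in root.iter("item")]
for title, link in items:
    print(title, link)
```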
I'd start with SpiderBomb. If you're on a shared host, run it at night, US time.
Then use the Linez Tuel to help clean up the results.
Tip: If you can find a spinner that won't spin the URLs, spinning the SpiderBomb descriptions and titles is a great way to get unique content. And since they're used as SERPs, they don't have to read like articles.
-Boom boom boom boom.