Suck.com, Gone for Good?

Suck.com, one of the most important and influential webzines, appears to be offline permanently, replaced by a porn search portal.

The strangest part is that the domain continues to belong to Lycos, with Hotwired's servers acting as the nameservers. If you query ns1.hotwired.com for the suck.com domain, it returns 198.65.105.202, the IP address of a Verio server currently hosting over 36,000 domains. The server is owned by a company called ParkingDNS.net, which seems to be hosting nothing but Parkingdots.com affiliate search portals.
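For anyone who wants to reproduce the lookup, here's a rough sketch in Python using the third-party dnspython package; the resolve() call is dnspython 2.x, and older releases spell it query(). The nameserver and address are just the ones mentioned above, and they may well have changed by the time you read this.

    # Query ns1.hotwired.com directly for suck.com's A record,
    # roughly as described above. Requires: pip install dnspython
    import socket
    import dns.resolver

    resolver = dns.resolver.Resolver(configure=False)
    # Resolver.nameservers expects IP addresses, so look up the
    # nameserver's hostname first with the system resolver.
    resolver.nameservers = [socket.gethostbyname("ns1.hotwired.com")]

    for record in resolver.resolve("suck.com", "A"):
        print(record)  # returned 198.65.105.202 at the time of this post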

It also appears that there’s no complete archive of Suck.com remaining anywhere online. Because the new owners are blocking web crawlers, Archive.org has blocked access to the archived version of the site. (Last year, Suckarchives.com expired and was snatched up by a squatter.)

If permanent, this is a tragedy for anyone who cares about the web’s history. Does anyone at Lycos know what’s going on? Also, if anyone out there has a complete copy of the Suck archives, please get in touch. (If you need to submit it anonymously, that’s fine.)

Update: Interesting stuff in the comments below. Greg Knauss, himself a Suck.com contributor, is proxying requests to the old Suck.com server through his own server at suck.eod.com. Also, Mike at Injoke.com posted a 200MB torrent of the entire Suck.com archive.

Update: Boy genius Aaron Swartz is mirroring the Suck.com snapshot from Mike’s torrent. Nice work!

This doesn’t change the fact that every link to a Suck.com article is still broken, but at least the articles aren’t lost.

January 2, 2006: Suck.com is back! Someone out there must have the inside story of what went on over the past few days.

Comments

    Because media companies are stupid.

    Rather than regarding archive.org as a backup and a chance to become part of the historical record of the web, they want control (and possibly non-existent revenues from people paying for archives).

    While it’s good that an archive is available as a torrent, it isn’t the same as anyone being able to search for a Suck piece (or stumble upon it). Not to mention all the dead links.

    It seems that someone at Wired News should do a story on this.

    I’ve put up a site that redirects requests to where suck.com used to point, so you can still get access to all the articles: http://suck.eod.com.

    This isn’t a mirror — it just adds the proper headers and makes the request of Carl’s copy of the archive.

    I hope Lycos points suck.com back to where it belongs, because even though the articles are now available, all the links out there are broken.
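    For the curious, a pass-through proxy like the one Greg describes can be tiny. Here's a rough sketch of the general idea in Python, not his actual setup; the upstream address is a placeholder, standing in for wherever the old Suck.com content still lives.

        # Minimal pass-through proxy sketch: forward each GET to the old
        # server's IP with a Host: suck.com header so the right virtual
        # host answers, then relay the response body back to the client.
        from http.server import BaseHTTPRequestHandler, HTTPServer
        from urllib.request import Request, urlopen

        UPSTREAM_IP = "192.0.2.1"  # placeholder, not the real archive server

        class ProxyHandler(BaseHTTPRequestHandler):
            def do_GET(self):
                req = Request("http://%s%s" % (UPSTREAM_IP, self.path),
                              headers={"Host": "suck.com"})
                with urlopen(req) as upstream:
                    body = upstream.read()
                    self.send_response(upstream.status)
                    self.send_header("Content-Type",
                                     upstream.headers.get("Content-Type", "text/html"))
                    self.send_header("Content-Length", str(len(body)))
                    self.end_headers()
                    self.wfile.write(body)

        if __name__ == "__main__":
            HTTPServer(("", 8080), ProxyHandler).serve_forever()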

    “Because the new owners have blocked web crawlers, Archive.org has purged the entire domain from its historical archive.”

    If Archive.org acts this way, it’s ridiculous… they should only block those years from the archive which actually blocked crawlers at the time!

    Actually, archive.org is BLOCKING access to its archived data because of the no robots policy on the current suck.com site. It has NOT “purged” the data.

    (This is how rumors get started, guys; let’s be a bit more exact about details, yes?)

    Here’s that as a link: http://suck.mirror.theinfo.org/

    Lenssen suggests archive.org “should only block those years from the archive which actually blocked crawlers”. If that were the case, they wouldn’t need to block the years — they wouldn’t have any data to block. The reason they have to make it retroactive is so that people who don’t know about the Web Archive can get their data out.

    Ironic. I originally typed the URL, suck.com, into my browser (Mosaic or Netscape) because I thought for sure that it would be a porn site.

    Life is strange.

    Suck is dead. Long live Suck.

    You know something — I think using Robots.txt is the absolute worst way to indicate to the archive to purge (or block) old versions of the page. Understandably, the Robots.txt was placed there before the address Suck.com itself was sold/stolen/hijacked/expired or what have you, but what if what happened to Suck.com happened today, and the new pornographic owners used Robots.txt to keep search engines from crawling their new porno site — why should that cause the previous Suck.com to be sent to digital oblivion? If someone wants the digital archive not to keep the old archives of their site, I think it really should be incumbent on them to write a letter to the archive itself stating this.
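    For what it's worth, the mechanism is easy to poke at with Python's standard urllib.robotparser, which applies a robots.txt the same way a crawler would. A blanket Disallow rules out every path for every user agent, old snapshots included. The rules below are the generic blanket block, not necessarily the exact file the portal served.

        # Feed a blanket-block robots.txt to the standard-library parser and
        # check what a crawler would be allowed to fetch. A real crawler would
        # download http://suck.com/robots.txt instead of hard-coding the rules.
        from urllib.robotparser import RobotFileParser

        rp = RobotFileParser()
        rp.parse([
            "User-agent: *",
            "Disallow: /",
        ])

        # Both checks come back False: everything is off-limits, which is the
        # signal Archive.org honors retroactively across its stored snapshots.
        print(rp.can_fetch("ia_archiver", "http://suck.com/any/old/article.html"))
        print(rp.can_fetch("*", "http://suck.com/"))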

    Now that Suck is back online, and the original robots.txt file restored, Archive.org is allowing access to the Suck archive again.

    At the time I wrote my post, the robots.txt file was blocking all spiders and Archive.org was denying access to the archive.

    Suck.com nowadays is “just” a historical curiosity, but at the time it was a damn good read. Actually, I’ll probably go reread a chunk just to see how well it’s held up.

    Still checking suck.com every once in a while, and I’m glad I didn’t check during its downtime.

    Just one question, maybe one of you guys can help:

    Is there anything as good as suck.com or at least slightly comparable?

    Looking for a suitable replacement but can’t find anything…

Comments are closed.