Suck.com, Gone for Good? Suck.com, one of the most important and influential webzines, appears to be offline permanently, replaced by a porn search portal.

The strangest part is that the domain still belongs to Lycos, with Hotwired servers acting as the nameservers. If you query the domain, it returns the IP address of a Verio server currently hosting over 36,000 domains. The server is owned by a company that seems to be hosting nothing but affiliate search portals.

It also appears that there’s no complete archive of Suck.com remaining anywhere online. Because the new owners have blocked web crawlers, the Web Archive has blocked access to the archived version of the site. (Last year, another domain expired and was snatched up by a squatter.)

If permanent, this is a tragedy for anyone who cares about the web’s history. Does anyone at Lycos know what’s going on? Also, if anyone out there has a complete copy of the Suck archives, please get in touch. (If you need to submit it anonymously, that’s fine.)

Update: Interesting stuff in the comments below. Greg Knauss, himself a Suck contributor, is proxying requests to the old site through his own server. Also, a commenter named Mike posted a 200MB torrent of the entire archive. Update: Boy genius Aaron Swartz is mirroring the snapshot from Mike’s torrent. Nice work!

This doesn’t change the fact that every link to a Suck article is still broken, but at least the articles aren’t lost.

January 2, 2006: Suck.com is back! Someone out there must have the inside story of what went on over the past few days.


    Because media companies are stupid.

    Rather than regarding the Web Archive as a backup and a chance to become part of the historic record of the web, they want control (and possibly non-existent revenues from people paying for archives).

    While it’s good that an archive is available as a torrent, it isn’t the same as anyone being able to search for a Suck piece (or stumble upon it). Not to mention all the dead links.

    It seems that someone at Wired News should do a story on this.

    I’ve put up a site that redirects requests to where Suck.com used to point, so you can still get access to all the articles.

    This isn’t a mirror — it just adds the proper headers and makes the request of Carl’s copy of the archive.

    I hope Lycos points Suck.com back to where it belongs, because even though the articles are now available, all the links out there are broken.
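    What Greg describes above (take the request, add the proper headers, fetch the page from a mirror of the archive) can be sketched in a few lines of Python. This is only an illustration under stated assumptions: the mirror base URL is a made-up placeholder, not Greg’s or Carl’s actual setup.

```python
# Sketch of a pass-through proxy for a dead site's archive.
# MIRROR_BASE is a hypothetical placeholder, not a real mirror.
from urllib.parse import urljoin
from urllib.request import Request, urlopen
from http.server import BaseHTTPRequestHandler, HTTPServer

MIRROR_BASE = "http://example-mirror.invalid/suck/"

def rewrite(path: str) -> str:
    """Map a request path on the old domain onto the mirror."""
    return urljoin(MIRROR_BASE, path.lstrip("/"))

class ArchiveProxy(BaseHTTPRequestHandler):
    def do_GET(self):
        # Fetch the same path from the mirror and relay the body,
        # restoring a sensible Content-Type header.
        req = Request(rewrite(self.path),
                      headers={"User-Agent": "archive-proxy"})
        with urlopen(req) as resp:
            body = resp.read()
        self.send_response(200)
        self.send_header("Content-Type", resp.headers.get_content_type())
        self.end_headers()
        self.wfile.write(body)
```

    To use it, you’d point DNS (or a local hosts entry) at the machine running this and start it with `HTTPServer(("", 8080), ArchiveProxy).serve_forever()`.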

    “Because the new owners have blocked web crawlers, the Web Archive has purged the entire domain from its historical archive.”

    If the Web Archive acts this way, it’s ridiculous… they should only block those years from the archive which actually blocked crawlers at the time!

    Actually, the Web Archive is BLOCKING access to its archived data because of the no-robots policy on the current site. It has NOT “purged” the data.

    (This is how rumors get started, guys; let’s be a bit more exact about details, yes?)

    Here’s that as a link:

    Lenssen suggests the Web Archive “should only block those years from the archive which actually blocked crawlers”. If that were the case, they wouldn’t need to block the years at all, because they wouldn’t have any data to block. The reason they have to make it retroactive is so that people who don’t know about the Web Archive can get their data out.

    Ironic. I originally typed the URL suck.com into my browser (Mosaic or Netscape) because I thought for sure that it would be a porn site.

    Life is strange.

    Suck is dead. Long live Suck.

    You know something: I think using robots.txt is the absolute worst way to tell the archive to purge (or block) old versions of a page. Understandably, the robots.txt was placed there before the address itself was sold/stolen/hijacked/expired or whathaveyou, but what if what happened to Suck.com happened today, and the new pornographic owners used robots.txt to keep search engines from crawling their new porn site? Why should that cause the previous site to be sent to digital oblivion? If someone wants the archive not to keep old copies of their site, it really should be incumbent on them to write a letter to the archive itself stating this.
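    The blanket robots.txt everyone is arguing about is only two lines long. Here is a sketch, using Python’s standard urllib.robotparser, of how a well-behaved crawler reads it; the crawler name and URL are just illustrative:

```python
# How a well-behaved crawler interprets a blanket robots.txt
# like the one the new owners put up.
from urllib.robotparser import RobotFileParser

rules = RobotFileParser()
rules.parse([
    "User-agent: *",  # applies to every crawler...
    "Disallow: /",    # ...and forbids the entire site
])

# Every URL on the domain is off-limits, which is why an archive
# that honors the file retroactively hides the old snapshots too.
print(rules.can_fetch("ia_archiver", "http://suck.com/daily/"))  # False
```

    The commenter’s point stands: a two-line file placed by whoever holds the domain today ends up governing a decade of someone else’s pages.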

    Now that Suck is back online, and the original robots.txt file restored, the Web Archive is allowing access to the Suck archive again.

    At the time I wrote my post, the robots.txt file was blocking all spiders and the Web Archive was denying access to the archive.

    Suck nowadays is “just” a historical curiosity, but at the time it was a damn good read. Actually, I’ll probably go reread a chunk just to see how well it’s held up.

    Still checking in every once in a while, and I’m glad I didn’t check during its downtime.

    Just one question, maybe one of you guys can help:

    Is there anything as good as Suck, or at least slightly comparable?

    Looking for an appropriate replacement but can’t find anything…

Comments are closed.