If you’re in the Portland, Oregon area, I’ll be at Powell’s Books at Cedar Hills this Thursday interviewing Matt Kirkland, creator of the enormously popular Dracula Daily, which serialized Bram Stoker’s 1897 novel as a Substack newsletter and turned it into an internet-scale book club with over 240,000 subscribers. It’s now published as a gorgeous hardcover volume, annotated with memes, fan art, and comics from the community.
Bram Stoker’s Dracula is an epistolary novel, told through a series of diary entries and letters, and Dracula Daily delivers each one to subscribers “as it happens,” on the day it’s dated, pacing the story out over six months from May to November.
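The mechanics behind that kind of date-keyed serialization are simple enough to sketch. Here’s a rough illustration in TypeScript, with a couple of made-up sample entries and a stand-in mailer function, of how “send each entry on the date it carries” might work; Dracula Daily itself runs on a newsletter publishing schedule, so treat this purely as an illustration.

```typescript
// Illustrative sketch of date-keyed serialization: deliver each entry of an
// epistolary novel on the calendar date it carries in the book.
// Not Dracula Daily's actual setup.

interface Entry {
  date: string;  // month-day the letter or diary entry is dated, e.g. "05-03"
  title: string;
  body: string;
}

// A couple of sample entries; a real list would cover May through November.
const entries: Entry[] = [
  { date: "05-03", title: "Jonathan Harker's Journal", body: "3 May. Bistritz. ..." },
  { date: "05-09", title: "Letter, Mina Murray to Lucy Westenra", body: "My dearest Lucy, ..." },
];

// Stand-in for a real newsletter send (an email API call in practice).
async function sendToSubscribers(subject: string, body: string): Promise<void> {
  console.log(`Sending "${subject}" (${body.length} chars) to subscribers`);
}

// Run once a day (from a cron job, say): send whatever the novel dates "today."
// Days with no dated entries send nothing, so the story arrives in bursts and
// lulls, just like the book.
async function sendTodaysEntries(today: Date): Promise<void> {
  const key =
    `${String(today.getMonth() + 1).padStart(2, "0")}-` +
    `${String(today.getDate()).padStart(2, "0")}`;
  for (const entry of entries.filter((e) => e.date === key)) {
    await sendToSubscribers(entry.title, entry.body);
  }
}

void sendTodaysEntries(new Date());
```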
In the parallel universe of last year’s Weird: The Al Yankovic Story, Dr. Demento encourages a young Al Yankovic (Daniel Radcliffe) to move away from song parodies and start writing original songs of his own. During an LSD trip, Al writes “Eat It,” a 100% original song that’s definitely not based on any other song, which quickly becomes “the biggest hit by anybody, ever.”
Later, Weird Al’s enraged to learn from his manager that former Jackson 5 frontman Michael Jackson turned the tables on him, changing the words of “Eat It” to make his own parody, “Beat It.”
This got me thinking: what if every Weird Al song was the original, and every other artist was covering his songs instead? With recent advances in A.I. voice cloning, I realized that I could bring this monstrous alternate reality to life.
Earlier this month, I wrote about Tiny Awards, a tiny prize to honor websites that “best embodies the idea of a small, playful and heartfelt web.” I was invited to be part of the inaugural award’s selection committee, and helped narrow down more than 270 submissions to 16 finalists, which were then put to a public vote.
This morning, Tiny Awards announced the winner: the dizzying and delicious Rotating Sandwiches by Lauren Walker. When I linked to it here back in March, I described it simply as the “best of the web, right here,” so I’m pretty happy with this result. Lauren will receive a $500 prize and a tiny trophy. Congrats!
The organizers of the award also released the full list of all 272 nominated websites, a “dizzying snapshot of the boundless creativity and artistic endeavor (and, occasionally, silliness) of the web (and, by extension, the people who make it).”
The organizers originally asked each member of the selection committee to decide on their top two picks from the full list of nominees. Given the volume, diversity, and quality of the entries, this was no easy task.
Now that the winner’s announced, I thought I’d share my own decision-making process, along with my personal list of runners-up.
In May, the creators of two of my favorite newsletters, Naive Weekly and Web Curios, reached out to see if I’d consider joining the selection committee of Tiny Awards, a tiny prize to honor websites that “best embodies the idea of a small, playful and heartfelt web.” I loved the idea and quickly accepted.
There were some additional rules: sites must have launched in the last 12 months, work on mobile and desktop without requiring an app or download, be made by individuals or a group of creators (i.e. not agencies or brands), and be primarily non-commercial.
Nominations were free and open to the public, unlike some other web awards, and the selection committee ended up reviewing over 270 submissions, which we narrowed down to a shortlist of 16 finalists, a wonderfully eclectic collection of websites.
The winner is decided by public voting, which is also free and easy, and closes next Thursday, July 20. I hope you take a look and cast your vote. Here’s a little about each of the finalists. Update: The winner was announced!
For the last two days, Elon Musk has claimed that Twitter is under attack from “several hundred organizations” who were conducting “EXTREME levels of data scraping,” forcing them to bring “large numbers of servers online on an emergency basis” and enact emergency measures.
Yesterday, Twitter started blocking all logged-out access, requiring users to sign in to view any tweet or profile. Elon Musk called it a “temporary emergency measure,” claiming they “were getting data pillaged so much that it was degrading service for normal users!”
Apparently, it didn’t stop the crush of traffic and, this morning, Musk announced they escalated their actions against supposed “extreme levels of data scraping” by rate-limiting the number of tweets you can view.
Immediately, Twitter users started seeing “Rate Limit Exceeded” messages, and every trending topic was about the collapse of Twitter.
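For context, a read cap like this involves very little machinery. Here’s a generic sketch of a per-user tweet-view limit that fails with the kind of error users were suddenly hitting; this is my own illustration of the concept, not Twitter’s actual implementation, and the window and cap numbers are arbitrary.

```typescript
// Generic illustration of a per-user read cap, not Twitter's implementation.
// Each account gets a fixed number of tweet views per time window; once the
// budget is spent, reads fail with "Rate limit exceeded" until the window resets.

const WINDOW_MS = 15 * 60 * 1000; // 15-minute window (arbitrary for this sketch)
const MAX_VIEWS = 600;            // arbitrary per-window budget

interface Counter { count: number; windowStart: number; }
const counters = new Map<string, Counter>();

function recordTweetView(userId: string, now = Date.now()): { ok: boolean; error?: string } {
  const counter = counters.get(userId);
  // First view, or the previous window has expired: start a fresh window.
  if (!counter || now - counter.windowStart >= WINDOW_MS) {
    counters.set(userId, { count: 1, windowStart: now });
    return { ok: true };
  }
  // Budget exhausted: reject the read until the window rolls over.
  if (counter.count >= MAX_VIEWS) {
    return { ok: false, error: "Rate limit exceeded" };
  }
  counter.count += 1;
  return { ok: true };
}
```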
Are shadowy AI companies scraping Twitter for training data? Maybe!
But on Mastodon this morning, web developer Sheldon Chang noticed another source of unusual traffic: a bug in Twitter’s web app that is constantly sending requests to Twitter in an infinite loop:
This is hilarious. It appears that Twitter is DDOSing itself.
The Twitter home feed’s been down for most of this morning. Even though nothing loads, the Twitter website never stops trying and trying.
In the first video, notice the error message that I’m being rate limited. Then notice the jiggling scrollbar on the right.
The second video shows why it’s jiggling. Twitter is firing off about 10 requests a second to itself to try and fetch content that never arrives because Elon’s latest genius innovation is to block people from being able to read Twitter without logging in.
This likely created some hellish conditions that the engineers never envisioned and so we get this comedy of errors resulting in the most epic of self-owns, the self-DDOS.
Unbelievable. It’s amateur hour.
He posted a video of the bug in action, showing it firing off hundreds of requests a minute.
On Twitter, software engineer Nelson Minar independently reproduced the bug with his own video capture.
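Nobody outside the company has seen the actual client code, but the failure mode Sheldon describes is a classic one: a frontend that retries failed requests immediately, with no backoff and no cap, will flood its own servers the moment those requests start getting blocked. Here’s a minimal sketch of that anti-pattern, with a made-up endpoint and function names, just to show how a few lines of retry logic can produce roughly ten requests a second per open tab:

```typescript
// Hypothetical sketch of the failure pattern described above, not Twitter's code.
// A timeline loader that retries instantly on failure, with no backoff and no
// retry cap, turns a blocked endpoint into a steady flood of requests.

const TIMELINE_URL = "https://example.com/api/home_timeline"; // stand-in endpoint

function renderTimeline(data: unknown): void {
  console.log("timeline loaded", data);
}

async function loadTimelineForever(): Promise<void> {
  while (true) {
    try {
      const res = await fetch(TIMELINE_URL);
      if (!res.ok) {
        continue; // blocked or rate-limited? just try again immediately
      }
      renderTimeline(await res.json());
      return; // success: stop polling
    } catch {
      continue; // network error? also retry instantly
    }
  }
}

// With the server failing fast, each loop iteration takes a fraction of a
// second, so a single open tab can easily fire ~10 requests a second forever.
// The standard fix is a retry cap plus exponential backoff (and giving up
// gracefully when the user simply isn't allowed to see the content).
void loadTimelineForever();
```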
It’s currently unclear when this bug went into production, or how much it’s actually impacting their traffic, so it’s hard to determine whether this bug inadvertently inspired Twitter to block unregistered access and add rate limits, or if the bug was triggered by the rollout of those changes.
On Bluesky, Twitter’s former head of trust and safety Yoel Roth wrote, “For anyone keeping track, this isn’t even the first time they’ve completely broken the site by bumbling around in the rate limiter. There’s a reason the limiter was one of the most locked down internal tools. Futzing around with rate limits is probably the easiest way to break Twitter.”
Sheldon suspects the bug was related to yesterday’s decision to block unregistered users from accessing Twitter, but in a followup, wrote that it’s “probably not the cause of their scraping panic and most of these requests are being blocked.”
It seems very likely that killing free access to the Twitter API led to a big increase in scraping, since countless businesses, organizations, and individuals used it for their projects. It’s also plausible that these issues are entirely unrelated.
Still, how funny would it be if this “emergency,” from start to finish, was brought on by a JavaScript bug that caused Twitter to DDOS itself, spawning all of these truly terrible decisions? At this point in Twitter’s downward spiral, nothing would surprise me.
If you know more, leave a comment or get in touch. Confidentiality guaranteed.