Opening the Pandora’s Box of AI Art

Last month, I finally got access to OpenAI’s DALL·E 2 and immediately started exploring the text-to-image AI’s potential for creative shitposting, generating horror after horror: the Eames Lounge Toilet, the Combination Pizza Hut and Frank Lloyd Wright’s Fallingwater, toddler barflies, Albert Einstein inventing jorts, and the can’t-unsee “close up photo of brushing teeth with toothbrush covered with nacho cheese.”

DALL·E 2 diligently hallucinated each image out of noise from the compressed latent space, multi-dimensional patterns discovered in hundreds of millions of captioned images scraped from the internet.

The prompt that finally melted my brain was the one above, with images of slugs getting married at golden hour. I originally specified a “tuxedo and wedding dress” with predictable results, but changing it to “wedding attire” gave the AI the flexibility to depict variations of what slugs might marry in, like headdresses made of cotton balls and honeycomb.


I’ve never felt as conflicted about an emerging technology as I do about DALL·E 2, which feels like borderline magic in what it’s capable of conjuring, but raises so many ethical questions that it’s hard to keep track of them all.

There are the many known issues that OpenAI has acknowledged and worked to mitigate, like the racial and gender biases in its image training set, and the lengths they’ve gone to avoid generating sexual or violent content, recognizable celebrities, and trademarked characters.

But it opens profound questions about the ethics of laundering human creativity:

  • Is it ethical to train an AI on a huge corpus of copyrighted creative work, without permission or attribution?
  • Is it ethical to allow people to generate new work in the styles of photographers, illustrators, and designers without compensating them?
  • Is it ethical to charge money for that service, built on the work of others?

There are fundamental questions about whether it’s even legal: these are largely untested waters in copyright law, and it seems destined to end up in court. Training deep learning models on copyrighted material may be fair use, but only a judge can decide that. (The fact that OpenAI filters out some subjects, like celebrity faces and Disney/Marvel characters, suggests they’re well aware of the risk of angering the biggest litigants.)

As these models improve, they seem likely to reduce demand for some paid creative services, from stock photography to commissioned illustrations. I empathize with the concerns of artists whose work was silently used to train commercial products that imitate their style, without their consent and with no way to opt out.


The world was just starting to grapple with the implications of this technology when, on Monday, a company called Stability AI released its Stable Diffusion text-to-image AI publicly.

Stable Diffusion is free, open-source, runs on your own computer, and ships with almost none of the guardrails and content filters of its predecessors. Its one safeguard is a Safety Classifier, enabled by default, that tries to determine if a generated image is NSFW, but it’s easily disabled.
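To give a sense of how little stands between a prompt and an image, here’s a minimal sketch of running it locally. I’m assuming the Hugging Face diffusers library here, which is only one of several ways to run the model (not the official release scripts), and the model ID, prompt, and filename are purely illustrative:

```python
# Minimal local text-to-image sketch using the Hugging Face diffusers library.
# Assumes a CUDA GPU and that you've accepted the model license on Hugging Face;
# the model ID, prompt, and output filename are illustrative.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

result = pipe("slugs getting married at golden hour, wearing wedding attire")

# The bundled safety checker flags images it thinks are NSFW and blacks them
# out. It's a separate, swappable component of the pipeline, which is why it's
# so trivial for people to remove.
print("NSFW flagged:", result.nsfw_content_detected)
result.images[0].save("slug-wedding.png")
```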

Unlike existing AI platforms like DALL·E 2 and Midjourney, Stable Diffusion can generate recognizable celebrities, nudity, trademarked characters, or any combination of those. (Try searching Lexica, the newly-launched Stable Diffusion search engine, for example output.)

Releasing an uncensored dream machine into the wild had some predictable results. Two days after its release, Reddit banned three subreddits devoted to NSFW imagery made with Stable Diffusion, presumably because of the rapid influx of AI-generated fake nudes of Emma Watson, Selena Gomez, and many others.

Screenshot of message explaining the "Stable Diffusion NSFW" subreddit was banned for violating Reddit's rules against non-consensual intimate media

Stable Diffusion’s permissive license allows commercial services like NightCafe to build on its model, and NightCafe encourages paying customers to generate art in the styles of living artists like Pendleton Ward, Greg Rutkowski, Amanda Sage, Rebecca Sugar, and Simon Stålenhag, who has spoken out against the practice.

Screenshot from NightCafe with a list of artist names recommended as modifiers for art prompts
List of artist modifiers in NightCafe

On top of that, Stability AI’s terms state that every image generated with their DreamStudio service is effectively public domain, released under the CC0 1.0 public domain dedication. They make no claim over the copyright of images generated with the self-hosted Stable Diffusion model. (OpenAI’s terms say that images created with DALL·E 2 are their property, with customers granted a license to use them commercially.)


A common argument I’ve seen is that training AI models is like an artist learning to paint and finding inspiration by looking at other artwork, which feels completely absurd to me. AI models are memorizing the features found in hundreds of millions of images, and producing images on demand at a scale unimaginable for any human—thousands every minute.

The results can be surprising and funny and beautiful, but only because of the vast trove of human creativity these models were trained on. Stable Diffusion was trained on LAION-Aesthetics, a 120 million-image subset of a 5 billion-image crawl of image-text pairs from the web, winnowed down to the images rated most aesthetically attractive. (OpenAI has been more cagey about its sources.)
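The winnowing itself is conceptually simple: score every image for “aesthetics” and keep only the high scorers. Here’s a rough sketch of the idea, with the filename, column names, and threshold all assumptions on my part; LAION’s actual pipeline scores images with a CLIP-based aesthetic predictor over parquet metadata shards.

```python
# Illustrative sketch of aesthetic winnowing: filter a web-scale crawl of
# image-text pairs down to the highest-scoring images. Filename, column names,
# and threshold are assumptions for illustration, not LAION's actual pipeline.
import pandas as pd

crawl = pd.read_parquet("crawl_metadata_shard.parquet")  # hypothetical metadata shard

keep = crawl[crawl["aesthetic_score"] >= 6.0]  # keep only the "prettiest" images
keep = keep[["url", "caption", "aesthetic_score"]]

print(f"Kept {len(keep):,} of {len(crawl):,} image-text pairs")
keep.to_parquet("aesthetic_subset.parquet")
```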

There’s no question it takes incredible engineering skill to develop systems to analyze that corpus and generate new images from it, but if any of these systems required permission from artists to use their images, they likely wouldn’t exist.


Stability AI founder Emad Mostaque believes the good of new technology will outweigh the harm. “Humanity is horrible and they use technology in horrible ways, and good ways as well,” Mostaque said in an interview two weeks ago. “I think the benefits far outweigh any negativity and the reality is that people need to get used to these models, because they’re coming one way or another.” He thinks that OpenAI’s attempts to minimize bias and mitigate harm are “paternalistic,” and a sign of distrust of their userbase.

In that interview, Mostaque said that Stability AI and LAION were largely self-funded from his career as a hedge fund manager, and that with additional resources, they’ve built a cluster of 4,000 Nvidia A100 GPUs with the support of Amazon that “ranks above JUWELS Booster as potentially the tenth fastest supercomputer.”

On Monday, Mostaque wrote that they plan to use those compute resources to expand to other AI-generated media: audio next month, and then 3D and video. I’d expect Stability AI to approach these new models in the same way, with little concern over their potential for misuse by bad actors, and with even less attention spent addressing the concerns of the artists and creators whose work makes them possible.


Like I said, I’m conflicted. I love playing with new technology, and I’m excited about the creative potential of these new tools. I want to feel good about the tools I use.

I don’t trust OpenAI for a bunch of reasons, but at least they seemed to try to do the right thing with their various efforts to reduce bias and potential harm, even if it’s sometimes clumsy.

Stable Diffusion’s approach feels irresponsible by comparison, another example of techno-utopianism unmoored from the reality of the internet’s last 20 years: how an unwavering commitment to ideals of free speech and anti-censorship can be deployed as a convenient excuse not to prevent abuse.

For now, generative AI platforms are some of the most resource-intensive projects in the world, leading to a vanishingly small number of participants with access to vast compute resources. It would be nice if those few companies would act responsibly by, at the very least, providing an opt-out for those who don’t want their work in future training data, finding new ways to help artists that do choose to participate, and following the lead of OpenAI in trying to minimize the potential for harm.

I don’t pretend to know where these things will go: the risks may be overblown and we may be at the start of a massive democratization in the creation of art, or these platforms may make the already-precarious lives of artists harder, while opening up new avenues for deepfakes, misinformation, and online harassment and exploitation. I’d really like to see more of the former, but it won’t happen on its own.

Waxy.org Turns 20

Hard to believe, but I started blogging 20 years ago today with this short post.

In my first ten years of writing, I published 415 posts and over 13,000 links. And in the last ten years, I published 136 posts and a little over 5,000 links, a pretty big drop from the ten years before.

There are some pretty obvious reasons why my posting slowed since 2012:

  • XOXO started that year, which became a big creative outlet for me, as well as a big time sink.
  • My long-form writing shifted elsewhere, with my column in WIRED and as a member of The Message publication on Medium, while short-form writing continued to land on Twitter.
  • I became more focused on quality than quantity, with a higher bar for what made it here.
  • I was less motivated to invest time in writing, in part because fewer people were reading.

I still enjoy writing though, and have no intention of stopping any time soon.

Ten years ago, I wrote a roundup of my favorite posts from my first decade of blogging, and I thought I’d do the same thing for 2012-2021. If you missed them the first time around, I hope you check them out this time. Looking back on the last ten years, I’m proud of so many of these pieces.


2012

Introducing Playfic. Announcing the launch of Playfic, a tool for writing and sharing Inform 7 interactive fiction games in the browser. Nearly 3,000 games have been published so far; I rounded up some highlights in 2013. (These days, I’d recommend using Borogove.)

The Perpetual, Invisible Window Into Your Gmail Inbox. I wrote about Unroll.me and similar apps that were quietly requesting access to all your email, an issue that exploded five years later when it was revealed they were selling user info to Uber, among others.

YouTube’s Content ID Disputes Are Judged by the Accuser. Raising awareness of YouTube’s end-run around the DMCA, which continues to be an issue today.

A Patent Lie: How Yahoo Weaponized My Work. This article blew up pretty big: I talked about how tech corporations encourage developers to patent their work, ostensibly for defensive purposes, only to see those patents used in litigation to stop innovation, popularizing the term “weaponized patents” in the process.

Instagram’s Buyout: How Does It Measure Up? Crunching the numbers on Instagram’s billion-dollar sale to Facebook against other notable acquisitions to see how it measured up. Instagram reportedly made $26 billion in ad revenue last year, 26 times its purchase price in a single year, so a pretty smart deal.

Criminal Creativity: Untangling Cover Song Licensing on YouTube. Trying to unravel the surprisingly complicated question of whether a cover song uploaded to YouTube is infringement or not.

Introducing XOXO. Launched on Kickstarter, sold every ticket in 50 hours.

The Unified Theory of XOXO. Once the dust settled from the first XOXO, I wrote about what we were trying to do and the decisions we made — all of which are still part of the festival today.

2013

Aaron. Remembering Aaron Swartz.

The New Prohibition. Occasionally, my posts end up turning into conference talks, like in this Creative Mornings presentation.

The Death of Upcoming.org. I found out Yahoo was shutting down Upcoming like everyone else, with 11 days’ notice. With Archive Team’s help, we were able to collectively archive the vast majority of the site, allowing me to later restore nearly every event to its original URL.

Remembering XOXO 2013. Where we started really figuring things out.

Screens on Screen. A huge dump of fake computer screens in movies, and the projects that popped up around it.

GoldieBlox and the Three MCs. Copyright and fair use analysis of a repurposed parody of the Beastie Boys’ “Girls” for a toy commercial.

2014

Ellen DeGeneres’ “Walter Mitty” Screener Leaks Online. I was the first to report on this screener linked to a celebrity, which got coverage in Variety, Hollywood Reporter, Deadline, and many more.

‘JIF’ Is the Format. ‘GIF’ Is the Culture. Steve Wilhite may have designed the GIF format, but the looping animated GIF was a product of the web, invented eight years later.

72 Hours of #Gamergate. Analyzing over 316,000 tweets that mentioned #Gamergate to spot trends and visualize the network, including clear evidence that most supporters were using newly-created accounts.

Diary of a Corporate Sellout. A personal post about the risks that come from selling your startup when it’s also an online community. “When you sell the house, you’re not just selling a house. You’re selling everyone inside.”

How to Flawlessly Predict Anything on the Internet. I still love this post, explaining how a classic confidence scam could be adapted to social media with convincing results.

Playing With My Son. One of my all-time favorites, the story of playing videogame history with my son in (roughly) chronological order. I repurposed this one for a talk at Gel 2015, with my son in the front row.

2015

Pirating the 2015 Oscars: HD Edition. An interesting shift in screener leaks: pirates didn’t want them anymore because DVD screeners were increasingly considered poor quality. “Pirates are now watching films at higher quality than the industry insiders voting on them.”

Never Trust A Corporation To Do A Library’s Job. My love letter to the Internet Archive, and Google’s failure to live up to their original mission statement to organize the world’s information.

If Drake Was A Piano. My experiments with converting MP3s to MIDI and back.

2016

Remembering XOXO 2016. 2016 was a busy year, between opening and closing the XOXO Outpost (our massive workspace for indie artists), working on the Upcoming reboot, and holding the fifth year of the festival. I didn’t get a lot of writing done.

Redesigning Waxy. I did squeeze in a redesign though, and some thoughts on blogging in 2016.

Creativity in a Post-Trump America. Just not a great year.

Go to Bed.

2017

The Long Cold Winter. Announcing the closure of the Outpost and the relaunch of Upcoming.

This Must Be The /r/Place. One of my favorite projects ever, led by future Wordle creator Josh Wardle.

Closing Communities: FFFFOUND! vs MLKSHK. Two very different approaches to shutting down an online community.

Pogo’s Politics. This post about Australian remix artist Pogo still gets traffic any time his name comes up, and people become aware of his repulsive views on women. “It’s hard to truly enjoy art made by someone you can’t respect.”

The Flagpole Sitta Lip Dub Turns 10. Reminiscing about a viral trend in the mid-2000s, and the video that helped popularize it.

You Think You Know Me. Announcing my wife Ami’s first card game, which I help edit and design, now published under the moniker Pink Tiger Games. Her fourth game, Lost for Words, is coming out later this year, this one co-designed with our son, Eliot. It’s turned into a real family business!

2018

A Tribute to YouTube Annotations. Six weeks before YouTube retired its annotations feature, I collected as many notable examples as I could find. Sadly, none of them are interactive anymore.

Demi Adejuyigbe at XOXO 2018. My only post about XOXO 2018, which was more than double the size in a new venue and absolutely exhausting, but still really memorable. Lizzo played the closing party and then sang karaoke with everyone! I regret not writing more about it while it was fresh in my mind.

Why You Should Never, Ever Use Quora. Quora has the most regressive archiving policy of any online community, and its eventual shutdown is likely to be an epic loss of collected knowledge.

2019

Dad. I don’t talk about my personal life often, but I sometimes make an exception for close friends and family I’ve lost.

Suck.com, Gone for Good (For Good). It’s been returning nothing but a PHP error (with database credentials!) for over a year.

Fast and Free Music Separation with Deezer’s Machine Learning Library. A series of AI audio experiments, an area I’ve been following for some time.

Turning Photos into 2.5D Parallax Animations with Machine Learning. A good excuse for me to learn how Google Colab notebooks work.

Unraveling the Mystery of “Visit Eroda,” The Tourism Campaign For An Island That Doesn’t Exist. A delightful ARG-like campaign that I followed in real-time as it developed, and the fascinating cultural divide between Harry Styles fans and ARG fans who didn’t want to believe.

How Artists on Twitter Tricked Spammy T-Shirt Stores Into Admitting Their Automated Art Theft. I want this post on a t-shirt.

2020

Paste Parties: The Ephemeral, Chaotic Joy of Random Clipboards. How I celebrate my birthday online every year: asking everyone to tweet me their unedited clipboards.

With questionable copyright claim, Jay-Z orders deepfake audio parodies off YouTube. The legal implications of AI-generated music are complex and fascinating.

OpenAI’s Jukebox Opens the Pandora’s Box of AI-Generated Music. Two days after my Jay-Z post, OpenAI released a neural network that could generate music in the style of various artists, with 7,100 song samples.

alt.binaries.images.underwater.non-violent.moderated: a deep dive. Solving a Usenet newsgroup curiosity, over 20 years later.

The House on Blue Lick Road. 2020’s best game was a 3D real estate listing of a sprawling hoarder house. I had to know more, so I picked up the phone and called the owner.

2021

Announcing Skittish. I spent all of last year working on Skittish, a virtual event space where you navigate the world as a little animal and talk to people near you with your microphone. It evolved quickly, hosted its first public events in June, and launched in November. I’m still working on it. You should check it out.

Colin’s Bear Animation, Revisited. Digging into the genealogy of a TikTok meme that bizarrely recreated the dance from Colin’s Bear Animation video, but with no other reference to the original.

Pirating the Oscars: Pandemic Edition. The pandemic really messed with my Oscar screener charts.


And that’s pretty much up to today. Thanks for sticking around and thanks for reading. See you in ten years?

In the Shadow of the Star Wars Kid

Last August, I entered a loft in downtown Portland, walked through a door, and met someone I’ve wanted to talk to for the last 20 years: Ghyslain Raza, the unwilling subject of the “Star Wars Kid” meme, the biggest viral video of the pre-YouTube era.

Since the video and its remixes exploded online in 2003, Ghyslain has refused all interview requests, with one exception: on the 10th anniversary of the video’s release in 2013, he spoke to a French-Canadian journalist for L’actualité magazine, in an interview later translated into English for its partner magazine Maclean’s.

But over the last couple years, he’s quietly worked with a group of documentary filmmakers to tell his story for the first time, in his own words. The full-length film was released today in French and English, as you’d expect from Quebec-based filmmakers. In English, it’s being released as Star Wars Kid: The Rise of the Digital Shadows, but I’m partial to the French title, Dans l’ombre du Star Wars Kid, which translates to “In the Shadow of the Star Wars Kid.” It feels much more fitting to the story they told.

It’s now available for streaming free from the National Film Board of Canada’s site, and I highly recommend watching it. I was lucky enough to get an advance screener and it’s a powerful film expertly told. Update: After a short window, the documentary can now only be viewed in Canada. No word on when it’ll be available elsewhere.

Making the Documentary

In February 2021, the documentary’s director, Mathieu Fournier, reached out to see if I’d speak to them about my role in the video’s initial spread and the fundraiser we held for him, my ultimately-futile attempt to shift the narrative to a positive light.

I’ve declined every interview request about this subject since 2003, but was surprised to hear that Ghyslain himself was deeply involved in the production, so I immediately agreed to participate.

The production team came to Portland for the filming, where I was interviewed for a couple hours by the filmmakers. Then, Ghyslain and I sat down for a long one-on-one conversation on camera about everything that happened 20 years ago, the impact it had on his life, and how he looks back on it now.

I’ve never talked about it publicly, but I regret ever posting it. From the start, it was obvious it was never meant to be seen, and mirroring it on my site without consent was wrong in a way that I couldn’t see when I was in my 20s, one year into blogging. I removed the videos once it was clear how it was affecting him, but I never should have posted them in the first place.

Meeting Ghyslain gave me the opportunity to tell him all of that in person, as well as in my interviews, some of which made it into the finished film.

As a side note, it was fascinating to get answers to questions I’ve wondered about for 20 years. Yes, Ghyslain actually received the iPod we sent him from the fundraiser, and used the gift cards to buy an iMac G4, both of which he’s kept to this day. He managed to avoid most of the remixes and media coverage, except for Arrested Development, which he watched live as it aired.

But more than anything, it was great to finally talk to him in person and see that he’s doing well. By all accounts, he handled everything that happened back then with a profound emotional maturity, despite how painful it was, and emerged on the other side with a uniquely interesting perspective that’s worth listening to.

Afterwards

After the documentary taping, we all met up for drinks on the roof deck at Revolution Hall, where we hold XOXO every year, and then went out for dinner and more drinks until late at night.

This time, Ghyslain and I were able to talk privately off camera, about our lives and families, about the Commodore 64 and typography, finding natural common ground. When he was younger, he was really into computers, but for obvious reasons, Ghyslain spent much of his life offline after 2003.

Like so many others, I saw my geeky teenage self when first watching the Star Wars Kid video, and sitting across from this 34-year-old man, I saw a parallel-world version of myself in my 30s. I first fell in love with the internet at age 15, the age Ghyslain was when he made the video.

That night, I couldn’t help but wonder how his life would have changed if it never happened. I was surprised to see that in the final film, there’s a moment where Ghyslain talks about our meeting, and wonders exactly the same thing. I hope you take the time to watch it.

Thanks to Ghyslain for his generosity and empathy, and thanks to the filmmakers for making this meeting possible: something I’ve quietly hoped would happen for 20 years.

Me and Ghyslain, August 2021

The Incredibly Satisfying Birth of a Marble

A couple weeks ago, I saw a TikTok video that blew my mind, leading me down multiple rabbit holes, starting with the epiphany that marbles are made on a marble run?! What an exciting way to be born.

The Soundtrack

The first thing I wondered about was the background music of the video, which came from a channel of Chinese industrial videos and seemed like a strange match even for TikTok.

Some digging led me to 77-year-old German musician Achim Reichel’s “Aloha Heja He,” a pop-rock sea shanty that became a hit for him in the early 1990s, from one of several albums of sea shanties he’d recorded since the 1970s.


Pirating the Oscars 2022: The Rise and Fall of the Screener Over 20 Years

It’s Oscar night! And for the second year in a row, every single Oscar-nominated film has leaked in HD quality before the ceremony, but only three of the 32 nominated films leaked as Oscar screeners, a radical change from years past. What the hell is going on?

Back in 2004, I started tracking the illicit distribution of Oscar screeners because the Academy of Motion Picture Arts and Sciences was seemingly in denial, or completely unaware, that virtually every nominated film routinely leaked online. (The L.A. Times headline that inspired it still makes me laugh.)

That project turned into an annual ritual: I’d wake up on the day the nominees were announced, add them to a spreadsheet that now covers 643 Oscar nominees across 20 years, and write up my analysis of the trends that emerged from the data.
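The analysis itself is nothing fancy. Here’s a rough sketch of how you could compute the same kind of yearly summary from a CSV export, with entirely hypothetical column names (not my actual spreadsheet layout):

```python
# Rough sketch of the yearly trend summary, assuming a CSV export with one row
# per nominee and hypothetical columns: ceremony year, whether a screener
# leaked, and days from release to the first high-quality leak.
import pandas as pd

nominees = pd.read_csv("oscar_nominees.csv")  # hypothetical export

by_year = nominees.groupby("ceremony_year").agg(
    nominees=("film", "count"),
    screener_leak_pct=("screener_leaked", lambda s: 100 * s.mean()),
    median_weeks_to_leak=("days_to_first_leak", lambda d: d.median() / 7),
)
print(by_year.round(1).tail())
```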

Continuing the trend from last year, we can see the pandemic accelerating a rapid decline of interest in screeners. Let’s take a look at the data to see why.

Death of the Screener

Screeners, as we know them, are dead. But not for the reasons you might think.

Last year, the Academy announced they were finally banning physical screeners. Voters would no longer receive DVDs or Blu-rays by mail; instead, screeners would be exclusively available through Academy Screening Room, a free video streaming app for iOS, Apple TV, and Roku, accessible only to Oscar voters.

Screenshot of the Academy Screening Room app for Roku

Studios and filmmakers are charged $12,500 for inclusion in Academy Screening Room, plus an additional $5,000 fee for optional forensic watermarking, and are required to follow rigid technical specifications for video, audio, captioning, and art. (Animated features, international films, documentaries, and shorts are exempt from the fee.) Presumably, this is still much cheaper for studios than producing and distributing physical screeners to all 9,487 eligible voting members, especially when each copy is individually watermarked to discourage piracy.
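As a back-of-the-envelope comparison: only the fees and voter count below come from reported figures; the per-disc cost is my own guess for authoring, individually watermarking, and mailing a disc.

```python
# Back-of-the-envelope cost comparison. The fees and voter count are reported
# figures; the per-disc cost is a guess for illustration only.
VOTERS = 9_487
ASR_FEE = 12_500 + 5_000       # inclusion + optional forensic watermarking
COST_PER_DISC = 10             # hypothetical all-in cost per watermarked, mailed disc

print(f"Academy Screening Room: ${ASR_FEE:>9,}")
print(f"Physical screeners:     ${VOTERS * COST_PER_DISC:>9,}")
```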

With so many voters still avoiding theaters because of the Omicron variant, and without the option of physical screeners, it’s safe to say that virtually every Academy voter was using Academy Screening Room this year. Over 160 films from 2021 were available in Academy Screening Room, including every eventual Oscar nominee.

To ease studio concerns over security, the Academy partnered with several technology companies for secure digital delivery of screeners, including Brightcove’s streaming video platform, NAGRA NexGuard Streaming’s forensic watermarking, Akamai’s Adaptive Media Delivery, and BuyDRM’s KeyOS MultiKey Service.

With all this DRM and forensic watermarking, could this spell the end of leaked digital screeners forever? If history is any guide, no. Historically, any DRM and watermarking can be defeated or bypassed, often with a surprisingly trivial amount of effort, if there’s demand for what it’s protecting.

Screeners aren’t dead in the piracy scene because physical screeners are gone, or because digital screeners are any harder to pirate. They’re dead because nobody cares about them anymore.

Screeners Are Irrelevant

There are two trends we can see over the last few years that have transformed how films are leaked online, and both of these radically accelerated during the pandemic.

  1. Fewer screeners are leaking than ever. Only three nominated screeners have leaked in each of the last two years, 9% of nominees compared to 80-90% 20 years ago and 30-50% pre-pandemic.
  2. Nominated films leak faster than ever. It used to take a median 10-11 weeks for the first high-quality leak of a movie online. The median for the last two years has been between 1-3 weeks.

As I noted last year, the pandemic destroyed the traditional release window between theatrical and streaming/video-on-demand dates. Movies used to have a theatrical window of exclusivity, typically 75 to 90 days, enforced by deals with theater owners. These windows vanished during the pandemic as box office ticket sales cratered, either because theaters were closed or people were just staying home.

Studios and streaming platforms opted instead for streaming-only releases, or released films simultaneously in theaters and online as “day-and-date” releases on Amazon Prime Video, Netflix, HBO Max, Disney+, Paramount+, and other streaming platforms.

The result was that every film leaked online in HD format from streaming/video-on-demand platforms before Oscar screeners were even released, rendering screeners effectively useless.

To be clear, this is continuing a trend that started long before the pandemic. We can see the impact of tightening release windows starting in 2015, but their decreasing desirability started in 2009, first driven by demand for higher-quality video than DVD screeners could provide and then by the ubiquity of streaming platforms.

The Rebirth of Screeners

20 years in, I’m tempted to end the project. It feels like a good stopping point. It’s clear that pirates won this fight, and as far as pirates are concerned, screeners are now largely irrelevant.

And yet, I’m still very curious what will happen when the worst effects of the pandemic on the film industry subside. Studios are planning to return to theatrical windows this year, though with an emerging industry standard of 45 days, down from the 75- to 90-day windows of the past.

It seems likely the gap between theatrical and streaming release dates will start to rise again, creating more pressure and demand for screeners. The Academy is putting a lot of faith in their technology for Academy Screening Room, but it feels like the only thing saving them right now is a lack of interest in what they’re protecting.

If that changes, the attention of every scene release group in the world will turn to getting access to an Academy Screening Room account, defeating its DRM, and removing any digital watermarks. The first to unlock it will find itself in possession of an incredible treasure chest: instant access to every screener submitted “for your consideration.”

So maybe I’ll keep this going a little longer to see what happens. Grab some popcorn, enjoy the show, and I’ll see you next year. 🍿🏴‍☠️

Academy Screening Room screenshot

Skittish Is Live!

As you may remember, I’ve spent the last few months working on Skittish, a playful space for virtual events and gatherings of all kinds — requiring only a browser and microphone, using spatial 3D audio to talk to others around you. Skittish was built in a 3D engine with a powerful but simple editor, making it easy to customize the world.

It took some time, but I’m happy to say Skittish is now open to everyone, along with a new homepage and public demo showing how it works. Anyone and everyone can now create their own world, start editing, and invite others to join you. Go try it out!

Huge thanks to all the beta testers, event organizers, and creators who used Skittish over the last few months, but special thanks to the XOXO community for their continued support and enthusiasm.

And extra extra special thanks to Simon Hales, lead engineer for the project, for everything he’s done to make using Skittish (and working on it) so joyful.

virodome17 Is A Gmail Scammer

Pardon the extremely-specific post, but I’ve found myself at the center of a bizarre case of mistaken identity and writing publicly about it seemed like the best option to stop it.

Someone with the email address ‘virodome17@gmail.com’ is emailing small independent online product manufacturers with an identical scam: they’re a huge fan of their products, but cut themselves on the packaging while opening it, and want a refund and damages. Screenshots of two examples are below, minus the photo of a gross bloody finger.

These companies are then contacting me via Twitter, Instagram, and email because they think that I’m the one that sent it. What gave them that impression? Well, take a look for yourself at the Google results when searching for that email address.

Despite the keyword “virodome17” not appearing anywhere on those pages, Google not only returns my 2016 tweets about Gmail’s “mic drop” April Fool’s joke, but also my LinkedIn page.

Combine this with the fact that the scammer signs his name “Andy,” and you can see where anyone would get the wrong idea that I was the sender. Is the scammer even impersonating me? It’s hard to say — “Andy” is a common name, and they’re not using my last name or any other aspect of my identity. They also don’t have control over what pages the Google algorithm returns, so it’s plausible this is just a bizarre coincidence.

Regardless, Google is ultimately responsible in two ways:

  1. The Google algorithm is returning my personal information for a completely unrelated search, leading to this identity mixup.
  2. Despite multiple reports of fraudulent activity to Gmail over the last year, the virodome17@gmail.com account is still actively being used to attempt to defraud others. Two separate companies contacted me about this in the last 24 hours alone.

My hope is that Google indexes this blog post and it starts showing prominently for anyone searching for the scammer’s “virodome17@gmail.com” address. But if you work on Gmail or Google Search, it’d be amazing if you could do something about it.

If you’re a company that received this scam and found this page, please post a comment about your experience. I’d love to see more screenshots, and I’ll post an update here if anything changes.

As a fun linguistics side note, I was curious about how both emails end with “do the needful,” a turn of phrase I’d never heard before. Digging into it, this expression is apparently popular in India but rarely used outside of it, meaning “do what is needed.” The Guardian calls it “the granddaddy of all Indianisms,” so I think I have a pretty good hunch where this scammer’s from.

Update: Another company reached out to me on Twitter with the same experience, and I’ve confirmed privately they were using the same template scam. Multiple Google employees also contacted me privately to say they’ve escalated this search ranking issue, so I hope this will be resolved soon.

In the comments, yet another company the scammer contacted noted that the Gmail account is now bouncing, indicating Google’s taken action against it.

And, as predicted, the #1 Google result for “virodome17@gmail.com” searches is now this post.

I doubt this will put an end to the scam, but it’ll hopefully end my role in it.

Skittish Hosts Its First Public Events

Last month, we reached a big milestone for Skittish, the playful virtual space for online events I’ve been working on over the last few months: we hosted our first big public events, including the delightful !!Con, New Relic’s FutureStack conference, the Flatpack Festival, and Future of the Browser, among others.

This was a bit of a marathon, allowing us to see how Skittish worked in the real world in a variety of different events, from film screenings and unconferences to livestreamed talks and dance parties. Throughout it all, we continually tweaked and tuned it every day, making changes and fixing issues as they came up.

The result was a huge wave of new features and development in May, which I wrote about over at the Skittish blog, along with a new feature article from TechCrunch and my FutureStack talk about it all. Go check it out!

This is likely the last big update before we start sending out invites to the announcement list and opening the doors to the public. If you’re interested, you can sign up at Skittish.com, subscribe to the news blog, or follow @SkittishHQ on Twitter and Instagram to follow along.

Colin’s Bear Animation, Revisited

13 years ago, I wrote about a 16-second video I instantly fell in love with and interviewed its creator. (It still holds up.)

This week, I saw a meme pop up on TikTok where literally tens of thousands of people re-enacted the Colin’s Bear Animation dance — but with no reference to the original and entirely different audio.

Instead of Mother 3’s “Funky Monkey Dance,” the soundtrack is a deep-fried muddy version of Pharrell Williams’ “Happy,” which seems like it was first uploaded to TikTok by @zunknownhamster, kicking off the meme with this video viewed 1.5 million times.

Embedded TikTok from @zunknownhamster, set to the sound “Why is video viral”

Stripped of its original “college animation class” context, the new meme format cracks a joke about the name of some movie, show, game, or other media property, followed by “idk i never watched it” or some variation.

There are over 22,000 of these:

stranger things fans when things get stranger
death note fans when the death is noted
jojos bizarre adventure fans when jojos adventure is bizarre
when you stay at your friend Freddys house for about a week
when people build forts in the night
skyblock players when there’s a block in the sky

You get the idea.

But how did it end up on TikTok? I messaged @zunknownhamster to ask where they first found it, but regardless, it’s clearly sourced from Kemdizzzle’s Garfield Dancing to Happy, uploaded to YouTube in June 2019.

That video replaced the audio from this February 2017 episode of Fatal Farm’s Lasagna Cat, a surreal webseries that ran from 2008 to 2017, and featured this pitch-perfect tribute to Colin’s Bear Animation.

Arriving 13 years after the original meme, it wouldn’t surprise me if most of the people doing the Colin’s Bear Animation dance on TikTok had never seen it before. Around 25% of TikTok’s user base wasn’t active online, or even alive, in 2008.

By definition, memes mutate and find new life and meaning over time. I’m just happy to see it keep evolving.

when colin animates a bear
idk ive never seen the video