With questionable copyright claim, Jay-Z orders deepfake audio parodies off YouTube

On Friday, I linked to several videos by Vocal Synthesis, a new YouTube channel dedicated to audio deepfakes — AI-generated speech that mimics human voices, synthesized from text by training a state-of-the-art neural network on a large corpus of audio.

The videos are remarkable, pairing famous voices with unlikely dialogue: Bob Dylan singing Britney Spears, Ayn Rand and Slavoj Žižek dueting Sonny and Cher, Tucker Carlson reading the Unabomber Manifesto, Bill Clinton reciting “Baby Got Back,” or JFK touting the intellectual merits of Rick and Morty.

Many of the videos have been remixed by fans, adding music to create hilarious and surreal musical mashups. Six U.S. presidents from FDR to Obama rap N.W.A.’s “Fuck Tha Police,” George W. Bush covers 50 Cent’s “In Da Club,” Obama covers Notorious B.I.G.’s “Juicy,” and, my personal favorite, Sinatra slurs his way through the Navy Seal copypasta, a decade-old 4chan meme.

Videos Taken Offline

Over the weekend, for the first time, the anonymous creator of Vocal Synthesis received a copyright claim on YouTube, taking two of his videos offline with deepfaked audio of Jay-Z reciting the “To Be or Not To Be” soliloquy from Hamlet and Billy Joel’s “We Didn’t Start the Fire.”

According to the creator, the copyright claims were filed by Roc Nation LLC with an unusual reason for removal: “This content unlawfully uses an AI to impersonate our client’s voice.”

Both videos were immediately removed by YouTube, but can still be viewed on LBRY, a decentralized and open-source publishing platform. Two synthetic Jay-Z videos remained online, in which he raps the Book of Genesis and the Navy Seal copypasta. (Update: The videos were temporarily reinstated by Google. See updates below.)

The video’s creator announced the takedown in a creative way: using the voices of Barack Obama, Donald Trump, Ronald Reagan, JFK, and FDR.

Here’s an excerpt transcript from the video:

“Over the past few months, the creator of the channel has trained dozens of speech synthesis models based on the speech patterns of various celebrities or other prominent figures, and has used these models to generate more than one hundred videos for this channel. These videos typically feature a synthetic celebrity voice narrating some short text or a speech. Often, the particular text was selected in order to provide a funny or entertaining contrast with the celebrity’s real-life persona.

“For example, some of my favorites are George W. Bush performing a spoken-word version of “In Da Club” by 50 Cent, or Franklin Roosevelt’s powerful rendition of the Navy Seals Copypasta.

“The channel was created by an individual hobbyist with a huge amount of free time on his hands, as well as an interest in machine learning and artificial intelligence technologies. He would like to emphasize that all of the videos on this channel were intended as entertainment, and there was no malicious purpose for any of them.

“Every video, including this one, is clearly labeled as speech synthesis in both the title and description. Which brings us to the reason why we’re delivering this message.

“Over the past two days, several videos were posted to the channel featuring a synthetic Jay-Z rapping various texts, including the Navy Seals Copypasta, the Book of Genesis, the song “We Didn’t Start the Fire” by Billy Joel, and the “To Be Or Not To Be” soliloquy from Hamlet.

“Unfortunately, for the first time since the channel began, YouTube took down two of these videos yesterday as a result of a copyright strike. The strike was requested by Roc Nation LLC, with the stated reason being that it, quote, “unlawfully uses an AI to impersonate our client’s voice.”

“Obviously, Donald and I are both disappointed that Jay-Z and Roc Nation have decided to bully a small YouTuber in this way. It’s also disappointing that YouTube would choose once again to stifle creativity by reflexively siding with powerful companies over small content creators. Specifically, it’s a little ironic that YouTube would accept “AI impersonation” as a reason for a copyright strike, when Google itself has successfully argued in the case of “Authors Guild v. Google” that machine learning models trained on copyrighted material should be protected under fair use.”

No Intent to Deceive

At its core, the controversy over deepfakes is about deception and disinformation. Earlier this year, Facebook and Twitter banned deepfakes that could mislead or cause harm, largely motivated by their potential impact on the 2020 elections.

Though it’s worth noting that the use of deepfakes for fake news is largely theoretical so far; as Samantha Cole covered for VICE, most are created for porn. (And, no, Joe Biden sticking out his tongue is not a deepfake.)

In this case, there’s no deception involved. As he wrote in his statement, every Vocal Synthesis video is clearly labeled as speech synthesis in the title and description, and falls outside of YouTube’s guidelines for manipulated media.

Copyright and Fair Use

With these takedowns, Roc Nation is making two claims:

  1. These videos are an infringing use of Jay-Z’s copyright.
  2. The videos unlawfully use “an AI to impersonate our client’s voice.”

But are either of these true? With a technology this new, we’re in untested legal waters.

The Vocal Synthesis audio clips were created by training a model with a large corpus of audio samples and text transcriptions. In this case, he fed Jay-Z songs and lyrics into Tacotron 2, a neural network architecture developed by Google.
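
For a rough sense of what goes into that corpus: text-to-speech models like Tacotron 2 are typically trained on many short audio clips paired with exact transcripts, often listed in a simple pipe-delimited filelist (the format used by NVIDIA’s reference Tacotron 2 implementation). Here’s a minimal sketch of assembling one; the folder layout, transcript file, and paths are hypothetical placeholders, not the creator’s actual pipeline.

```python
# A hypothetical sketch of building a Tacotron 2-style training filelist:
# one "path/to/clip.wav|transcript text" line per short audio clip.
from pathlib import Path
import json

CLIP_DIR = Path("corpus/jayz/wavs")                 # placeholder folder of short .wav clips
TRANSCRIPTS = Path("corpus/jayz/transcripts.json")  # placeholder {clip stem: transcript} mapping
FILELIST = Path("corpus/jayz/train_filelist.txt")

def build_filelist() -> None:
    transcripts = json.loads(TRANSCRIPTS.read_text(encoding="utf-8"))
    lines = []
    for wav in sorted(CLIP_DIR.glob("*.wav")):
        text = transcripts.get(wav.stem, "").strip()
        if not text:
            continue  # skip clips without a clean transcript
        lines.append(f"{wav}|{text}")
    FILELIST.write_text("\n".join(lines) + "\n", encoding="utf-8")
    print(f"Wrote {len(lines)} clip/transcript pairs to {FILELIST}")

if __name__ == "__main__":
    build_filelist()
```

From there, a pretrained Tacotron 2 checkpoint is typically fine-tuned on the new filelist, which roughly matches the creator’s description later in this post: a few hours of data preparation, then about twelve hours of training.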

It seems reasonable to assume that a model and audio generated from copyrighted audio recordings would be considered derivative works.

But is it copyright infringement? Like virtually everything in the world of copyright, it depends—on how it was used, and for what purpose.

It’s easy to imagine a court finding that many uses of this technology would infringe copyright or, in many states, publicity rights. For example, if a record producer made Jay-Z guest on a new single without his knowledge or permission, or if a startup made him endorse their new product in a commercial, he’d have clear legal recourse.

But, as the Vocal Synthesis creator pointed out, there’s a strong case to be made this derivative work should be protected as a “fair use.” Fair use can get very complicated, with different courts reaching different outcomes for very similar cases. But there are four factors judges use when weighing a fair use defense in federal court:

  1. The purpose and character of the use.
  2. The nature of the copyrighted work.
  3. The amount and substantiality of the portion taken.
  4. The effect of the use upon the potential market.

There’s a strong case for transformation with the Vocal Synthesis videos. None of the original work is used in any recognizable form: instead of sampling in the traditional sense, an undisclosed set of vocal samples, stripped from their instrumentals and context, is used to generate an amalgam of the speaker’s voice.

And in most cases, it’s clearly designed as parody with an intent to entertain, not deceive. Making politicians rap, philosophers sing pop songs, or rappers recite Shakespeare pokes fun at those public personas in specific ways.

Vocal Synthesis is an anonymous, non-commercial project: the channel isn’t monetized with advertising, there’s no clear financial benefit to the creator, and the impact on the market value of Jay-Z’s discography is non-existent.

There are questions about the amount and substantiality of the borrowed work. But even if the model was trained on everything Jay-Z ever produced, it wouldn’t necessarily rule out a fair use defense for parody.

Ultimately, there are two clear truths I’ve learned about fair use from my own experiences: only a court can determine fair use, and while it might be a successful defense, fair use won’t protect you from getting sued and the costs of litigating are high.

Interviewing the Creator

As far as I know, this is the most prominent example of a celebrity claiming copyright over their own deepfakes, the first example of a musician issuing a takedown of synthesized vocals, and according to the creator, the first time YouTube’s removed a video for impersonating a voice with AI. (Previously, Conde Nast took down a Kim Kardashian deepfake by claiming copyright over the source video, and Jordan Peterson ordered a voice simulator offline.)

I reached out to the anonymous creator of Vocal Synthesis to learn more about how he makes these videos, his reaction to the takedown order, and his concern over the future of speech synthesis. (Unfortunately, Roc Nation didn’t respond to a request for comment.)


How do you feel about the takedown order? Were you surprised to receive it?
I was pretty surprised to receive the takedown order. As far as I’m aware, this was the first time YouTube has removed a video for impersonating a voice using AI. I’ve been posting these kind of videos for months and have not had any other videos removed for this reason. There are also several other channels making speech synthesis videos similar to mine, and I’m not aware of any of them having videos removed for this reason.

I’m not a lawyer and have not studied intellectual property law, but logically I don’t really understand why mimicking a celebrity’s voice using an AI model should be treated differently than someone naturally doing an (extremely accurate) impression of that celebrity’s voice. Especially since all of my videos are clearly labeled as speech synthesis in both the title and description, so there was no attempt to deceive anyone into thinking that these were real recordings of Jay-Z.

Can you talk a little about the effort that goes into generating a new model? For example, how long does it typically take to gather and train a new model until it sounds good enough to publish?
Constructing the training set for a new voice is the most time-consuming (and by far the most tedious) part of the process. I’ve written some code to help streamline it, though, so it now usually takes me just a few hours of work (it depends on the quality of the audio and the transcript), and then there’s an additional 12 hours (approximately) needed to actually train the model.

Are you using Tacotron 2 for synthesis?
Yeah, I’m using fine-tuned versions of Tacotron 2.

I saw you’ve struggled getting enough dialogue to fully develop some models, like with Mr. Rogers. Have there been other voices you’ve wanted to synthesize, but it’s just too challenging to find a corpus to work from?
Yeah, several. Recently I tried to make one for Theodore Roosevelt, but there’s only about 30 minutes of audio that exists for him (and it’s pretty poor quality), so the model didn’t really come out well.

The Crocodile Hunter (Steve Irwin) is another one I really want to do, and I can find enough audio, but I haven’t been able to find any accurate transcripts or subtitles yet (it’s very tedious for me to transcribe the audio myself).

How do you decide the voices and dialogue to pair together?
I try to consistently have all my voices read the Navy Seals Copypasta and the first few lines of the Book of Genesis, since it’s easier to hear the nuances of each voice when I can compare them to other voices reading the same text. Other than that, there’s no real method to it. If I have an idea for voice/text combination that I think would be funny or interesting enough to be worth the effort of making the video, then I’ll do it.

What do these videos mean to you? Is it more of a technical demonstration or a form of creative expression?
I wouldn’t really consider my videos to be a technical demonstration, since I’m definitely not the first to make realistic speech synthesis impersonations of well-known voices, and also the models I’m using aren’t state-of-the-art anymore.

Mainly, I’m just making these videos for entertainment. Sometimes I just have an idea for a video that I really want to exist, and I know that if I don’t make it myself, no one else will.

On the more serious side, the other reason I made the channel was because I wanted to show that synthetic media doesn’t have to be exclusively made for malicious/evil purposes, and I think there’s currently massive amounts of untapped potential in terms of fun/entertaining uses of the technology. I think the scariness of deepfakes and synthetic media is being overblown by the media, and I’m not at all convinced that the net impact will be negative, so I hoped that my channel could be a counterexample to that narrative.

Are you worried about the legal future for creative uses of this technology?
Sure. I expect that this technology will improve even more over the next few years, both in terms of accuracy and ease of use/accessibility. Right now it seems to be legally uncharted waters in some ways, but I think these issues will need to be settled fairly soon. Hopefully the technology won’t be stifled by overly restrictive legal interpretations.

It seems inevitable that, at some point, an artist’s voice is going to be used against their will: guesting on a track without permission, promoting products they aren’t paid for, or maybe just saying things they don’t believe. What would you say to artists or other public figures who are worried that this technology will damage their rights and image?
There are always trade-offs whenever a new technology is developed. There are no technologies that can be used exclusively for good; in the hands of bad people, anything can be used maliciously. I believe that there are a lot of potential positive uses of this technology, especially as it gets more advanced. It’s possible I’m wrong, but for now at least I’m not convinced that the potential negative uses will outweigh that.


Thanks to the anonymous creator of Vocal Synthesis for their time. You can subscribe to the YouTube channel (for now) for new videos, follow updates and remixes in the /r/VocalSynthesis subreddit, and find the video mirror on LBRY.

Update: I just heard from Vocal Synthesis’s creator that the copyright strike was removed, and both videos are back on his channel. I initially suspected that Roc Nation dropped the copyright claim, but Nick Statt at The Verge reported that Google reviewed the DMCA takedowns.

“After reviewing the DMCA takedown requests for the videos in question, we determined that they were incomplete,” a Google spokesperson tells The Verge. “Pending additional information from the claimant, we have temporarily reinstated the videos.”

If Roc Nation provides the missing information to complete the DMCA requests, the videos will go offline again. Or, given the press coverage, they may choose to let it go. We’ll see!

Paste Parties: The Ephemeral, Chaotic Joy of Random Clipboards

Yesterday was my birthday, and like I’ve done for the last four years, I posted a single tweet that instantly destroyed my mentions for over 24 hours.

That tweet kicked off a paste party with over 2,000 replies, a potpourri of pure chaos and joy.

Random strings from emails and chat, passwords and 2FA tokens to unknown apps, screenshots and photos, obscure Unicode characters, dollar amounts from spreadsheets, bits of text in languages from Python to Esperanto, and so many links to articles, songs, videos, tweets, and obscure web pages.

It’s a momentary snapshot of digital ephemera, to be used and immediately discarded, much of it never meant to be seen by anyone and stripped of all context.

I first saw this idea in a private file-sharing/discussion community, and tried it on Twitter back in 2012, giving away copies of games and movies to people who replied with the contents of their clipboard. (Those attempts netted 14 and 24 replies, respectively, but Twitter won’t show threaded replies for older tweets.)

But the idea goes back much further. Discussion forums and message boards have played variations of the “Ctrl+V Game” (or “Ctrl+V Threads”) since at least the early 2000s. Some of them ran for years, like this 12-year-long thread from Ants Marching with 4,500 replies.

The earliest examples I found are this Usenet thread from May 2001 (thanks, Ben!) and this thread from October 2001, but pre-2001 digital archives are hard to search these days. I wouldn’t be surprised if this idea went back to forums, Usenet, and BBSes in the ’80s or ’90s. (Add a comment if you know more!)

Without context, everything seems more mysterious. You wonder what it meant, or why someone had it in their clipboard.

https://twitter.com/aidanz/status/1253069039875784719

It’s a great way to discover interesting links to music, video, articles, and web pages, because if it was in someone’s clipboard, it probably means they found it interesting enough to send to someone.

Our clipboards show temporary glimpses of work in progress, whether it’s art, design, or code.

And so many good videos.

https://twitter.com/billyterr/status/1253089776938311680

It’s also a snapshot of a moment in time: we’re at the height of a global pandemic, and our clipboards reflect it in the content we’re copying.

https://twitter.com/aflawson/status/1253113690896949248
https://twitter.com/angledge/status/1253068732030574592
https://twitter.com/bvac/status/1253142259861721099
https://twitter.com/failladrum/status/1253071195924123648

This tiny peek into everyone’s lives — their work, interests, and concerns, or even just the mundane momentary ephemera that’s forgotten two seconds later — is the perfect birthday gift.

Thanks for the presents. See you next year. ✂️📋🎉

Kickstarting Flatter Me: A Compliment Battle Card Game

Three years ago, my wife Ami designed and developed her first game, a charming conversational card game called You Think You Know Me, which went on to sell over 9,000 copies around the world and is now close to selling out its second print run.

I loved helping out with the packaging and card design for You Think You Know Me, a return to my pre-web career in desktop publishing and print production, as well as making the official homepage to support it. (The cards are all CSS!)

The followup to her first game is Flatter Me, a new game where you compete with friends to give compliments, with rules similar to the classic card game of War. It takes literally seconds to learn, explained in full in the project video below.

Each of the 250 cards has a unique compliment on it, which you can give away as a little token of affection.

Once again, I helped out with the packaging and card designs, and if it hits its goal, you can expect to see a site at flatterme.cards once it’s officially on sale.

I know I’m biased, but Ami’s games have a gentle sweetness that really resonates with me. They’re all designed to bring people together, whether it’s by learning more about people you love or simply by telling them how much they mean to you.

Her games have rules and win conditions like any other card game, but they’re so quick and easy to understand that they become a convenient framework to enrich the connections between friends, family, and partners.

Flatter Me is now funding on Kickstarter, currently at 95% funded (!) with three days to go, and I’d love it if you checked it out or helped spread the word. Thanks!

Bbbreaking News: Discovering Amateur News Videos by Monitoring Journalists on Twitter

If you’ve ever looked at the replies on any newsworthy amateur video posted to Twitter, you’ll see an inevitable chorus of news organizations and broadcast journalists, usually asking two questions:

  1. Did you shoot this video?
  2. Can we use it on all our platforms, affiliates, etc with credit?

That gave me an idea, which I posted to Twitter.

Within two days, a talented developer named Corey Johnson made it real by launching Bbbreaking News.

I’ve returned regularly since Corey launched it and, as expected, it’s a powerful way of tracking a particular type of breaking news: visual stories with footage captured by normal people at the right place and right time.

Much of it is of interest only to local news channels: traffic accidents, subway mishaps, a wild animal on the loose, the occasional building fire.

But frequently, Bbbreaking News shows the impact of gun violence and climate change: a near-constant stream of active shooter scenarios, interspersed with massive brush fires, catastrophic flooding, and extreme weather events.

It’s a fascinating way to see the stories that broadcast media is currently tracking, and to view their sources before they can even report on them, captured by the people stuck in the middle.

I recommend checking it out. Thanks to Corey for running with the idea and saving me the effort of building it myself!

The Tools I Use: My Setup, 11 Years Later

On January 22, 2009, I linked to Daniel Bogan’s newly-launched Uses This (then called “The Setup”), an interview series where he asks interesting people about “the tools and techniques they use to get things done.”

Three days later, Daniel asked me on AOL Instant Messenger if I’d be open to doing an interview myself.

I happily agreed—and then waited nearly 11 years to get around to it, despite his occasional prodding.

Since he first asked me, Daniel’s published over 1,000 interviews with an incredibly interesting group of people spanning dozens of fields and professions.

So I finally sat down and wrote my answers, and the interview is now live.

I can’t say it’s particularly interesting or meaningful, but it might give you a glimpse into how I think about the tools I use to make things.

And Daniel? Thanks for your patience.

How Artists on Twitter Tricked Spammy T-Shirt Stores Into Admitting Their Automated Art Theft

Yesterday, an artist on Twitter named Nana ran an experiment to test a theory.

Their suspicion was that bots were actively looking on Twitter for phrases like “I want this on a shirt” or “This needs to be a t-shirt,” automatically scraping the quoted images, and instantly selling them without permission as print-on-demand t-shirts.

Dozens of Nana’s followers replied, and a few hours later, a Twitter bot replied with a link to the newly-created t-shirt listing on Moteefe, a print-on-demand t-shirt service.

Several other t-shirt listings followed shortly after, with listings on questionable sites like Toucan Style, CopThis, and many more.

Spinning up a print-on-demand store is dead simple with platforms like GearBubble, Printly, Printful, GearLaunch (who power Toucan Style), and many more: you create a storefront with thousands of theoretical product listings, but merchandise is only manufactured on demand through third-party printers who handle shipping and fulfillment, with no inventory.

Many of them integrate with other providers, allowing these non-existent products to immediately appear on eBay, Amazon, Etsy, and other stores, but they’re only manufactured when someone actually buys them.

The ease of listing products without manufacturing them is how we end up with bizarre algorithmic t-shirts and entire stock photo libraries on phone cases. Even if they only generate one sale daily per 1,000 listings, that can still be a profitable business if you’re listing hundreds of thousands of items.

But whoever’s running these art theft bots found a much more profitable way of generating leads: by scanning Twitter for people specifically telling artists they’d buy a shirt with an illustration on it. The t-shirt scammers don’t have the rights to sell other people’s artwork, but they clearly don’t care.
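
To make the suspected mechanism concrete, here’s a minimal sketch of the kind of polling such a bot could run against Twitter’s v2 recent-search API: find tweets containing the trigger phrases, then note which tweet each one quotes, since that’s where the artwork lives. The endpoint and field names are Twitter’s; the query, bearer token, and everything else here are illustrative assumptions, not the scammers’ actual code.

```python
# An illustrative sketch (not the scammers' code) of scanning for "put this on a
# shirt" tweets and recording which tweet each one quotes.
import os
import requests

SEARCH_URL = "https://api.twitter.com/2/tweets/search/recent"
TRIGGER_QUERY = '("I want this on a shirt" OR "this needs to be a t-shirt") is:quote'

def find_shirt_requests(bearer_token: str) -> None:
    resp = requests.get(
        SEARCH_URL,
        headers={"Authorization": f"Bearer {bearer_token}"},
        params={
            "query": TRIGGER_QUERY,
            "tweet.fields": "referenced_tweets,author_id",
            "max_results": 50,
        },
        timeout=30,
    )
    resp.raise_for_status()
    for tweet in resp.json().get("data", []):
        quoted = [r["id"] for r in tweet.get("referenced_tweets", []) if r["type"] == "quoted"]
        if quoted:
            # A real bot would fetch the quoted tweet's media next and push the
            # image straight into a print-on-demand listing.
            print(f"Tweet {tweet['id']} quotes {quoted[0]}: {tweet['text'][:60]!r}")

if __name__ == "__main__":
    find_shirt_requests(os.environ["TWITTER_BEARER_TOKEN"])
```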

Once Nana proved that this was the methodology these t-shirt sellers were using, others jumped in to subvert them.

Of course, it worked. Bots will be bots.

For me, this all raises two questions:

  1. Who’s responsible for this infringement?
  2. What responsibility do print-on-demand providers have to prevent infringement on their platforms?

The first question is the hardest: we don’t know. These scammers are happy to continue printing shirts because their identities are well-protected, shielded by the platforms they’re working with.

I reached out to Moteefe, who seems to be the worst offender for this particular strain of art theft. Countless Twitter bots are continually spamming users with newly-created Moteefe listings, as you can see in this search.

Unlike most print-on-demand platforms, such as RedBubble, Moteefe doesn’t reveal any information about the user who created the shirt listings. They’re a well-funded startup in London, and have an obligation not to allow their platform to be exploited in this way. I’ll update if I hear back from them.

Until then, be careful telling artists that you want to see their work on a shirt, unless you want dozens of scammers to use it without permission.

Or feel free to use this image, courtesy of Nakanoart.

Update

Nearly every reply to the official @Disney account on Twitter right now is someone asking for a shirt. I wonder if their social media team has figured out what’s going on yet.

I know I shouldn’t buy them, but some of these copyright troll bait shirts are just amazing.


Adam Koford’s Bootleg Peanuts Reboot

Earlier this month, the very talented Adam Koford, creator of the Laugh-Out-Loud Cats webcomic, started posting these wonderful bootleg Peanuts comics to his Twitter account, and he’s continued almost every day since.

Loose and sketchy, they capture the essence of Charles Schulz’s Peanuts so well: sweet and sad, combining childlike wonder and existential dread. As he went on, they started evolving a unique style of their own, distinct from the Peanuts characters but still recognizable.

None of Adam’s comic tweets are threaded, making them hard to link to or catch up on, so I created this Twitter Moment aggregating them all in one place with Adam’s blessing. I embedded the whole thing below.

Needless to say, you should follow Adam on Twitter and Instagram. Just don’t tell the Peanuts estate.


Unraveling the Mystery of “Visit Eroda,” The Tourism Campaign For An Island That Doesn’t Exist

Late last week, people on Twitter started noticing sponsored tweets promoting the island of Eroda, linking to a website advertising its picturesque views, marine life, and seaside cuisine.

The only catch? Eroda doesn’t exist. It’s completely fictional. Musician/photographer Austin Strifler was the first to notice, bringing attention to it in a long thread that unraveled over the last few days.

The Visit Eroda website is full of strange details:

  • It’s dated copyright 2004, but the domain was registered on October 28 of this year.
  • Rotating banner ads on the site are served locally, and just point back to the Eroda homepage.
  • Some mysterious copy. In the description of the Eroda Ferry, “Our recommendation? Avoid leaving Eroda on odd numbered days…” For the fishing charter, “For extra good luck, make sure you wear one gold earring…” And for the Fisherman’s Pub, “The only rule of the bar? Don’t mention a pig in the pub.”
  • A map of the island was apparently generated in Inkarnate, an online fantasy map maker.

Two key facts indicated this was more than just one prankster’s internet goof, and that it was a well-funded viral campaign.

  1. The Eroda site is actively running a large number of ads across Twitter, Facebook, Instagram, YouTube, and Spotify.
  2. The visiteroda.com domain is managed by MarkMonitor, a relatively expensive service primarily used by large companies to manage and protect their domains.

Digging Deeper

Over the weekend, fans of Alternate-Reality Games (ARGs) raced to learn more, sharing information in a newly-created Eroda subreddit, dedicated Discord server, and crowdsourced docs.

The Eroda campaign continued to feed the mystery with a new YouTube video, and mysterious new posts on Twitter, Facebook, and Instagram.

The Daily Dot’s Nashwa Bawab was the first to write about the campaign, with an article on Saturday afternoon about the conspiracy theories.

Personally, I tried every trick I know to identify the owners, with no useful information. I looked at the HTML/CSS source, EXIF metadata for photos on the site, text strings in the map PDF file, IP addresses and server host, current and historical WHOIS and DNS records, reverse IP/WHOIS lookups, robots.txt, XML sitemaps, brute-forcing filenames, Google Analytics IDs, server architecture, ad tracker codes, and social network forensics.
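
As an example of one of the simpler checks in that list, here’s a minimal sketch of dumping a downloaded photo’s EXIF metadata with Pillow, looking for camera, software, or authorship tags that might identify whoever produced it. The filename is a placeholder; in Eroda’s case, checks like this turned up nothing useful.

```python
# A minimal sketch: dump EXIF tags from a photo saved off the site.
from PIL import Image
from PIL.ExifTags import TAGS

def dump_exif(path: str) -> None:
    exif = Image.open(path).getexif()
    if not exif:
        print(f"{path}: no EXIF data (likely stripped before upload)")
        return
    for tag_id, value in exif.items():
        print(f"{TAGS.get(tag_id, tag_id)}: {value}")

if __name__ == "__main__":
    dump_exif("eroda-harbour.jpg")  # hypothetical filename for an image saved from the site
```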

Whoever was behind it covered their tracks well—but not well enough.

Solving the Mystery

One theory emerged from the large and obsessive Harry Styles fandom: Eroda was a promotion for Harry Styles’ upcoming album, Fine Line, due out next month on December 13.

The evidence seemed thin at first, but kept mounting. Among the clues:

  • Many of the photos and video from the Visit Eroda site and social media campaigns appear to have been shot in St. Abbs, a small fishing village on the southeastern coast of Scotland, the same location where Harry Styles was filming an as-yet-unreleased music video last August.
  • One of the cast members in the video sports a very unusual hairdo, elaborate pretzelesque braids. The About Eroda page says, “In particular, Erodean hairstyles have become a rather bold expression of self amongst the island’s youth.”
  • Some of the place names on Eroda may reference the song titles on the album. The Fisherman’s Pub is located “on the corner of Cherry Street and Golden Way,” while the first tracks on Sides A and B of Fine Line are called “Golden” and “Cherry.” The island’s name itself, Eroda, may be a reference to the third song, “Adore You.”
  • Another site launched for the new album, Do You Know Who You Are, was similarly managed by MarkMonitor, with similar coding styles for the CSS.

Any of these could be written off as coincidence.

Until last night, when Ryan J, executive producer of music magazine Down In The Pit, received a Visit Eroda ad on Facebook, and noticed that Facebook reported the ad was served to him because he’d visited Harry Styles’ official website.

This not only confirms that the Eroda team is targeting Harry Styles fans, but also reveals a clear ownership link: advertisers can only retarget Facebook ads to visitors of sites where they’ve installed the Facebook Pixel tracker.

In other words, Harry Styles’ official homepage and Visit Eroda are managed by the same people.
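
If you wanted to look for that shared infrastructure yourself, one rough check is to fetch each site’s homepage and compare any Facebook Pixel IDs found in the standard inline snippet (the fbq('init', '<id>') call that loads fbevents.js). A minimal sketch, with the second URL as a placeholder for the artist’s official site; it only catches pixels embedded directly in the HTML, not ones injected by a tag manager.

```python
# A rough check for shared tracking infrastructure: compare inline Facebook Pixel IDs.
import re
import requests

PIXEL_INIT = re.compile(r"fbq\(\s*['\"]init['\"]\s*,\s*['\"](\d+)['\"]")

def pixel_ids(url: str) -> set:
    html = requests.get(url, timeout=30).text
    return set(PIXEL_INIT.findall(html))

if __name__ == "__main__":
    eroda = pixel_ids("https://visiteroda.com")
    official = pixel_ids("https://example-artist-site.com")  # placeholder for the artist's official site
    shared = eroda & official
    print("Shared pixel IDs:", ", ".join(sorted(shared)) if shared else "none found in inline HTML")
```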

Despite all of their efforts at secrecy, the marketing agency behind this viral campaign was exposed by an unexpected source, Facebook’s ad transparency tools.

But Why?

For non-fans, this may be anti-climactic or even confusing. Why would a musician launch a viral campaign like this just to promote a new album?

ARGs and other forms of transmedia storytelling are a creative way to build a world around a piece of art, whether it’s a videogame, TV show, or album, while teasing out details for dedicated fans.

Though more common in games and TV/film, bands like Twenty One Pilots, Nine Inch Nails, and AFI have all used ARGs to promote the launch of concept albums.

For Nine Inch Nails’ Year Zero (2007), clues were hidden in concert t-shirts, USB drives left at shows, and encoded in the audio waveforms in tracks on the album itself, fleshing out Trent Reznor’s vision of the dystopian world of the concept album. The clues led to an exclusive, underground Nine Inch Nails concert for his most dedicated fans.

It’s a way for an artist to express themselves beyond the work itself, and a way to involve a community of fans, joining them together to collectively solve a mystery.

It’s too early to say where this campaign is going, but I expect we’ll know on December 13. Until then, it’s a perfect example of how impossibly hard it can be to keep a secret from a global community of dedicated fans on the internet in 2019.

Updates

December 2. Today, the Visit Eroda account tweeted the teaser trailer for Harry Styles’ “Adore You” music video, resolving the mystery for any lingering skeptics.

Since this started, I’ve participated in the Discord channel and followed each new clue and development. For me, the most interesting part was watching the cultural divide between two fandoms: ARG enthusiasts and Harry Styles stans.

The Discord server was started by ARG fans, but as Harry Styles fans joined looking for new information, it became a constant source of conflict. Admins required nearly all Harry Styles-related discussion to move out of general channels, even as evidence mounted that the campaign was promotion for his album.

Many of the ARG fans, desperate for any explanation beyond Harry Styles, constantly tried to debunk solid proof like the Facebook Pixel connection.

This morning, once the video was released and all doubt removed, it triggered a wave of frustrated farewells as dozens of members quit the Discord, while the Harry Styles fans were more excited than ever.

If the goal was to energize his fan base for the release of new material, the Eroda campaign was an unmitigated success.

I know many ARG enthusiasts were hoping for something deeper, but as someone with no interest in his music, I’m still grateful to the creative team behind the island of Eroda for making the internet just a bit more mysterious, if only for a week or two.

December 6. The full “Adore You” music video premiered this morning, telling the full story of Eroda. Great song, great video.

Billboard interviewed some of the digital team at Columbia Records behind the campaign.

Turning Photos into 2.5D Parallax Animations with Machine Learning

For years, filmmakers have used 2.5D parallax to make static photos feel more dynamic, as in The Kid Stays In The Picture, the 2002 documentary about film producer Robert Evans that popularized the technique.

Traditionally, a video editor would use Photoshop to isolate the photo elements on separate layers, fill in the removed objects to complete the background, and animate the layers in a tool like After Effects. Here’s a typical tutorial, showing the time-consuming and tedious process.

Last September, a team at Adobe Research released a paper and video demonstrating a new technique for animating a still image with a virtual camera and zoom, adding parallax to create what they call “the 3D Ken Burns effect.”

This new technique uses deep learning and computer vision to turn a static photo into a 2.5D parallax animation in seconds, using a neural network to estimate depths, render a virtual camera through space, and fill in the missing areas.

On Monday night, researcher Simon Niklaus finally got permission to release the code, posting it to Github with a CC-NC license, allowing anyone to experiment with it for themselves.

Sample Animations

It’s incredibly fun to play with. I ran some famous images through it, and then put a call out to Twitter for more ideas. Here are the results. Click anywhere on the video to play it.

John Rooney/Associated Press, Ali vs. Liston (1965)
Alfred Eisenstaedt, V-J Day in Times Square (1945)
Elvis Presley and Richard Nixon (1970)
Pete Souza, Situation Room
Matt McClain/Washington Post, Gordon Sondland testifies
Disaster Girl
The Unexplainable Picture

Surprisingly, it even works on illustrations and paintings.

Martin Handford, Where’s Waldo
Georges Seurat, A Sunday Afternoon on the Island of La Grande Jatte

Try It Yourself

Unlike the Spleeter library, the 3D Ken Burns library requires PyTorch with an Nvidia GPU and CUDA drivers. Sadly, Apple phased out CUDA support in Mojave, but there’s an even easier way to play around with it.

I created a Google Colab notebook here, which will let you process images on Google’s GPUs entirely in your browser.

If you’re unfamiliar with Colab, it can be a bit intimidating. Try this, and let me know if you get stuck.

  1. Open the Google Colab notebook.
  2. Click File->Open In Playground Mode to run it yourself.
  3. Click “Connect” to connect to a hosted runtime, a temporary Google server where the commands will run.
  4. From the “Runtime” menu, click “Run All.” A warning will pop up; click “Run Anyway.”
  5. On the left-hand side of the window, click the tab to open the Files sidebar.
  6. The final command processes the “doublestrike.jpg” sample image, and generates a new file in the /images directory called “autozoom.mp4.”
  7. Upload your own images by right-clicking the “images” folder and clicking Upload. Change the input/output filenames, and click the play button to run the final command again.

Good luck!
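
If you’d rather skip Colab and run it locally on a CUDA-capable machine, the notebook’s final step roughly boils down to the following, assuming the repository lives at github.com/sniklaus/3d-ken-burns; the script name and --in/--out flags are assumptions based on its README, so check the repo before relying on them.

```python
# A local-run sketch of what the notebook does; flag names are assumptions based on
# the repo's README, so double-check them there first.
import subprocess

REPO = "https://github.com/sniklaus/3d-ken-burns.git"

def run(cmd, cwd=None):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True, cwd=cwd)

if __name__ == "__main__":
    run(["git", "clone", REPO])
    run(
        ["python", "autozoom.py",
         "--in", "./images/doublestrike.jpg",   # the bundled sample image
         "--out", "./images/autozoom.mp4"],     # the generated parallax animation
        cwd="3d-ken-burns",
    )
```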

Update: This Google Colab notebook by Manuel Romero is much faster and easier to use, with a handy widget to upload files, process images in bulk, and download all the finished animations.

The Deletion of Yahoo! Groups and Archive Team’s Rescue Effort

Yahoo! Groups circa July 2001

On December 14, everything ever posted to Yahoo! Groups in its 20-year history will be permanently deleted from the web. Groups will continue running as email-only mailing lists, but all public content and archives — messages, attachments, photos, and more — will be deleted.

You have until then to find your Yahoo login, sign into their Privacy Dashboard, and request an archive of your Yahoo! Groups.

For me, it took ten full days to get an email that my archive was ready to download (are they doing this by hand!?), but it appears complete: it contained a folder for every group I belonged to, each with its own ZIP files for messages, files, and links.

The messages archive is a single plain-text file in Mbox email format with every message ever posted to the group. That’s enough for me, but if you wanted, you could import it into Thunderbird or any other mail app that supports Mbox.
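
If you’d rather skim that Mbox file without a mail client, Python’s standard-library mailbox module can read it directly. A minimal sketch; the path is a placeholder for wherever your export’s messages file ends up.

```python
# A minimal sketch: print a quick summary of a Yahoo! Groups Mbox export.
import mailbox

def summarize(mbox_path: str, limit: int = 10) -> None:
    box = mailbox.mbox(mbox_path)
    print(f"{len(box)} messages in {mbox_path}")
    for msg in list(box)[:limit]:
        date = msg.get("Date", "?")
        sender = msg.get("From", "?")
        subject = msg.get("Subject", "(no subject)")
        print(f"{date}  {sender}: {subject}")

if __name__ == "__main__":
    summarize("groups/my-old-group/messages/messages.mbox")  # placeholder path from the export
```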

In the late ’90s and early 2000s, I belonged to several Yahoo! Groups (and its earlier incarnation, eGroups) for niche online communities, former jobs, small groups of friends, and weird internet side projects. Until the launch of Google Groups, it was the de facto free way to easily set up a hosted mailing list and discussion forum.

The Archive Team wiki charts the rise and fall of Yahoo! Groups, showing a peak in 2006 and a rapid decline after that.


Many of these private groups are effectively darkweb, accessible only to members of the group. If you don’t save a copy of the private groups you belong to, it may very well be lost for good.

Archive Team’s Rescue Effort

As you’d expect, the volunteer team of rogue archivists known as Archive Team are working hard to preserve as much of Yahoo! Groups as possible before its shutdown.

Their initial crawl discovered nearly 1.5 million groups with public message archives that can be saved, with an estimated 2.1 billion messages between them. As of October 28, they’ve archived an astounding 1.8 billion of those public messages.

Unfortunately, archiving the files, photos, attachments, and links in those groups is much harder: you have to be signed in as a member to view that content, which requires answering a reCAPTCHA. If you’d like to help answer reCAPTCHAs, they made a Chrome extension to assist with the coordination effort.

If you’d like to nominate a public Yahoo! Group to be saved by Archive Team, you can submit this form. If you’d like them to archive a private group, you can send a membership invite to this email address and it’ll be scheduled for archiving. More details are on the wiki.