Weeknotes 44

My first TrustCon: sextortion, generative AI and criminal networks behind NCII

Hera Hussain
12 min read · Aug 2, 2024
A picture of a blue California sky looking over the Bay Bridge

What I did

I first became aware of the Trust & Safety Professional Association (TSPA) in 2022, and I couldn’t attend the conference in 2023 because I was heavily pregnant and avoiding travel. I vowed to go in 2024. Initially my plan was to go to the Dublin one as well, but I was too ambitious about coming back from maternity leave, so I decided to go to just the larger global conference instead. I tried my best to attend the conference with a teething baby (the only baby there).

People were amused to see the baby and played with Ardeshir — mostly women. They opened doors for me. Stood up to give me their seat if the room was full. Helped me put food on my plate or pressed the button on the escalator if I had my hands full. Staying at the same venue as the conference was the best choice, as it meant I could nip to the room to give him a break from the stimulation, and hand him over to my husband for stretches of time.

What I learned

I view conferences as either a learning or a community-building opportunity — ideally both (though it’s hard to get both). TrustCon managed to do both. I was struck by how open and candid the conversations were, because they were amongst peers — trust and safety professionals working in the technology space, without feeling like they were on duty to respond to grilling questions from civil society and the press. It’s important that all sectors have such spaces. For me, RightsCon is that place for activists working on digital rights.

However, I do think TrustCon would benefit from having more civil society presence in sessions (especially those of us who have been working on digital rights for years) to add our experience and ideas, and to create opportunities to cross-pollinate our spaces.

Three examples of why I say this:

  • In a session on image-based abuse, I raised the point of how survivors who don’t…
  • In a session about generative AI and open source models, the speakers talked about a range of harms that can come from 4-C-h-a-n-type spaces, but failed to mention the visceral threat of the manosphere, which has resulted in targeted killings of women and mass shootings.
  • Another domestic violence advocate mentioned a session where the speaker (out of ignorance rather than prejudice) was talking about mass violence and a particular incident, and quipped that it was different from routine cases of domestic violence (as if those are less targeted or violent). If a civil society representative had been on the panel, they could have addressed how that’s not a helpful framing and far from the truth.

My other take-away, which may be useful for the TrustCon programme committee: it was excellent to see a whole track dedicated to child safety, but I expected gender (and how women and gender minorities are at a proven higher risk of abuse and violence) to be considered in the same sessions, and that was rarely the case. Apart from the sessions on image-based abuse, I didn’t hear gender mentioned even once. Maybe it’s just a coincidence and I happened to go to the sessions where this didn’t come up, but I thought it was worth saying.

I would have liked to attend more sessions than I did, but sometimes they got full and other times they didn’t match Ardeshir’s mood or needs. I’m picking out some of the sessions I went to and the notes I took on them (I’m using NCII when talking about image-based abuse here, as the event was in the US and that’s the term the speakers were using, so that’s what I jotted down in my notes).

A Revealing Picture: Inside the Flourishing Online Market for AI-Generated Non-Consensual Nude Imagery

The proliferation of generative AI technologies has transformed the threat landscape for Trust & Safety (T&S) teams, particularly through the rise of synthetic non-consensual intimate imagery (NCII) and the harms to adults and children that come with it. This presentation will discuss findings from recent investigations into the synthetic NCII landscape, highlighting its evolution from a niche online marketplace to a fully fledged digital industry driven by key actors with over 28 million unique visitors a month. In this presentation, we will present findings from a recent series of investigations, exploring specific abusive behaviors by synthetic NCII providers and detailing how they exploit online platforms across the stack to scale and monetize their activities. We will emphasize practical takeaways for T&S teams and propose policy and enforcement strategies that target the actors enabling this rise in image-based abuse, in addition to the content alone. — Santiago Lakatos, Venable Blue

This was a fascinating presentation which I enjoyed from the floor while I kept Ardeshir busy playing and cooing with his toys. Santiago Lakatos had a comprehensive presentation about the prevalence of image-based abuse which had been done in partnership with Graphika. The full report is here.

  • Many creators use affiliate marketing like any other commercial product, with micro-influencers happily using affiliate links as they share profiles of deepfake producers and unclothing apps. The volume of referral link spam for these services has increased by more than 2,000% on platforms including Reddit and X since the beginning of 2023.
  • A set of 52 Telegram groups used to access NCII services contained at least 1 million users as of September 2023.
  • It’s easy to find undress apps — they can’t operate at scale in hiding. They can’t operate on the dark web; they need the open web to find customers. This is no longer niche.
  • 50% of traffic went to 3 clusters of NCII operators — each owned by one organisation or individual. This makes the industry very vulnerable to disruption.
  • Clothoff is not an outlier (I’ve written about this before) and, without intervention, NCII creation will increase.
  • Creating NCII is such good business (🤮) that websites offering AI-generated porn are now offering nudify tech as a premium feature, sometimes disguising themselves as AI art services that redirect users to a separate page for the undressing tools.
  • The researchers suggest that the industry should take an actor-based approach (going after the creators of these apps and deepfakes), not just a content-based approach (taking down the content), and adapt policies so that promotion of these services (banned on most platforms) is actively enforced. Platforms should also work together to take down these highly networked criminal enterprises, as they are multi-platform actors.
  • I asked a question about whether survivors copyrighting their own image would make platforms, legislators and law enforcement take quicker action, given that copyrighted music is taken down within minutes.
  • Someone else asked him about what a good NCII policy looks like, and Reddit’s policy got a mention, though it has issues around link sharing.

Taylor Swift, Leonardo DiCaprio & Tom Cruise: No, We’re Not Trying To Sell You Cookware

Threat actors use deepfake technology for identity theft, catfishing, spreading mis- and disinformation, and generating NCII, CSAM, and CSEM. While regulation is clear surrounding identity and likeness theft, it’s far less clear with regard to generated explicit material. In this presentation, Ally Madrone will discuss deepfake creation, how threat actors are using this technology, and strategies T&S teams can employ to determine whether a particular piece of content is legitimate or generated, including video injection detection, presentation attack detection, and GAN image detection. — Ally Madrone

This was an excellent presentation by Ally on the history of deepfakes (albeit with mostly Western references) as well as how they are made and can be detected.

It was all “the uncanny valley” (TL;DR: CGI that isn’t quite good will feel creepy to us) until Ian Goodfellow’s 2014 work on generative adversarial networks (GANs), which took the unrealistic realm of CGI into sophisticated machine learning. This took us from fully synthetic images to face swapping (so common that all social media platforms have a form of filter that does this).
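
For the curious, the adversarial idea is simple enough to sketch in code: a generator learns to produce convincing fakes while a discriminator learns to tell real from fake, and each improves against the other. This is a minimal toy of my own (PyTorch, tiny vectors standing in for images), not anything shown in the talk.

```python
# Toy GAN sketch (illustrative only, not from the talk).
# A generator learns to mimic a "real" data distribution while a
# discriminator learns to tell real samples from generated ones.
import torch
import torch.nn as nn

LATENT, DATA = 8, 2  # noise size, data size (real images are far larger)

generator = nn.Sequential(nn.Linear(LATENT, 32), nn.ReLU(), nn.Linear(32, DATA))
discriminator = nn.Sequential(nn.Linear(DATA, 32), nn.ReLU(), nn.Linear(32, 1))

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(64, DATA) * 0.5 + 2.0  # stand-in "real" data
    fake = generator(torch.randn(64, LATENT))

    # Discriminator step: label real samples 1, generated samples 0.
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: try to fool the discriminator into saying "real".
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

The same tug-of-war, scaled up to images and faces, is what made face swapping convincing enough to leave the uncanny valley.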

I enjoyed the talk so much that I didn’t take many notes, but here are some incidents that got a shout-out: the 2017 Scarlett Johansson and Taylor Swift fakes (“deepfake” was coined on a Reddit forum that year), Reddit’s 2018 ban and ads on Reddit targeting NCII, Jordan Peele’s 2018 deepfake PSA, Meta not taking down deepfakes until the 2019 Mark Zuckerberg video, and a 2019 audio deepfake scam affecting the banking sector. In the last three years, deepfake technology has improved significantly, from the believable Zelensky deepfake in 2022 to generative AI wreaking havoc on elections in 2023 and 2024.

Uncovering the Role of Yahoo Boys in the Rise of Financial Sextortion Targeting Minors

Financial sextortion is the fastest growing crime targeting children in North America and Australia — accelerating at an alarming rate, with incidents surging up 1,000% in the past 18 months. In a December 2023 hearing, FBI Director Wray warned Congress that sextortion is “a rapidly escalating threat,” and teenage victims “don’t know where to turn.” This report reveals that virtually all of the financial sextortion targeting minors today is directly linked to a distributed West African cybercriminal group called the Yahoo Boys. Additionally, this investigation unveils previously unreported views into the social media platforms where these criminals share their sextortion scripts, tools, and methods, which has allowed this crime to proliferate at an exponential rate. — Paul Raffile

This was a somber presentation, and many of us could be heard gasping at different points. Paul’s presentation was based on the report he did alongside Alex Goldenberg, Cole McCann, and Joel Finkelstein from the Network Contagion Research Institute, with updates on everything that has happened since. Paul told us he got started on this path because a friend of his was a victim of financial sextortion who sought his help, and he then realised how big a problem it is.

  • Financial sextortion is the most rapidly growing crime targeting children in the United States, Canada, and Australia. Surprisingly, nearly all of this activity is linked to West African cybercriminals (who used to do the “Nigerian prince” scams) known as the Yahoo Boys, who are primarily targeting English-speaking minors and young adults.
  • Most children are targeted on Instagram, Snapchat, and Wizz (47% of Wizz’s users have experienced sextortion on the platform). The tenfold increase in sextortion cases in the past 18 months (as of January 2024) is a direct result of the Yahoo Boys distributing sextortion instructional videos and scripts on TikTok, YouTube, and Scribd, enabling and encouraging other criminals to engage in financial sextortion. They are actively sharing how they do it, and people are learning from them. It’s all out in the open. Paul has reported many of these videos, but he says platforms are very slow to take them down despite the content clearly breaching their policies.
  • The sextortion criminals are “bombing” high schools, youth sports teams, and universities with fake accounts, using advanced social engineering tactics to coerce their victims into a compromising situation. They start by adding everyone at a school as a friend, and some kids will accept and follow back. Then the criminals message a victim pretending to be someone from their school; the victim sees they have common connections and accepts. Then the sexting, or the deepfakes, may start. Many platforms have acted on the friend-bombing technique by making friend lists private or flagging when an adult with no common connections to a child tries to DM or friend them (a rough sketch of that kind of check follows this list).
  • Generative Artificial Intelligence apps are already being used to target minors in a fraction of sextortion-at-scale operations.
  • 11 boys have taken their own lives in the US, Canada and the UK after falling victim to the Yahoo Boys.
  • The shortest time from receiving the first threatening message to suicide was 27 minutes.
  • In Nigeria, Google Trends showed searches for “how to blackmail with pictures” trending up 850%.
  • What happens if you pay? They keep asking for more. They will not stop.
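
On the friend-bombing mitigation mentioned above: since it is essentially a risk heuristic, here is a hypothetical sketch of what such a check could look like. All the names, fields, and thresholds are my own inventions for illustration; real platforms combine far richer signals (device, behaviour, network graph, report history).

```python
# Hypothetical friend-request risk check for minors (illustrative only).
# Fields and thresholds are invented, not any platform's actual logic.
from dataclasses import dataclass

@dataclass
class Account:
    age: int               # self-declared or inferred age
    account_age_days: int  # how long the account has existed
    recent_reports: int    # times the account was reported recently

def should_flag_request(sender: Account, recipient: Account,
                        mutual_connections: int) -> bool:
    """Flag a friend request to a minor for friction or human review."""
    if recipient.age >= 18:
        return False  # this check only protects minors
    if sender.recent_reports > 0:
        return True   # recently reported accounts always get friction
    adult_stranger = sender.age >= 18 and mutual_connections == 0
    brand_new_account = sender.account_age_days < 7
    return adult_stranger or brand_new_account

# Example: a three-day-old adult account with no mutual friends is flagged.
print(should_flag_request(Account(25, 3, 0), Account(14, 400, 0), 0))  # True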

Some recommendations from Paul on how to fight this:

  • Make friends list private by design for children.
  • Remove the Yahoo Boys from platforms and designate them an organised criminal group, in the same way that terrorist organisations are designated.
  • Sextortion-related complaints should go to the top of the trust and safety team’s response queue, as we know that 27 minutes is the shortest period in which a child took their own life. In that time period, the victim had received 200 threatening messages. (A toy sketch of such a triage queue follows this list.)
  • For Snapchat: friending remains the first risk factor. Telling users that a person has recently been reported (“do you still want to add them?”) would help children stay safe.
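
To make the triage recommendation concrete, here is a toy sketch of a severity-first report queue in which sextortion reports jump ahead of everything else, using Python’s heapq. The categories and weights are invented for illustration, not any real team’s tooling.

```python
# Toy severity-first report queue (illustrative; categories and
# weights are invented). Lower severity number = handled sooner.
import heapq
import itertools

SEVERITY = {"sextortion": 0, "csam": 0, "threat": 1, "harassment": 2, "spam": 3}
_counter = itertools.count()  # tie-breaker: FIFO within a severity level

queue: list = []

def submit(report_id: str, category: str) -> None:
    severity = SEVERITY.get(category, 3)  # unknown categories rank lowest
    heapq.heappush(queue, (severity, next(_counter), report_id, category))

def next_report():
    return heapq.heappop(queue) if queue else None

submit("r1", "spam")
submit("r2", "harassment")
submit("r3", "sextortion")  # arrives last but is handled first
print(next_report())         # (0, 2, 'r3', 'sextortion')
```

The point of the sketch is simply that ordering by severity rather than arrival time is a small engineering change, and with a 27-minute worst case it could plausibly save lives.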

Some other things I looked up after the session:

  • US and Canada: Instagram and Snapchat are the most common platforms used for sextortion. Perpetrators operating from the two most common source countries, Nigeria and Côte d’Ivoire, use slightly different tactics and platforms. Perpetrators leverage tactics to intentionally fan a victim’s worry about the life-changing impacts of their nudes being shared — often repeating claims that it will “ruin their life.” — Thorn 2024
  • 8% of German and 6% of French young people think sharing nude imagery is normal (Thorn 2024)
  • USA: In 2022, the top five platforms ever used by minors were YouTube (98%), TikTok (82%), Minecraft (80%), Roblox (72%), and Fortnite (71%). The three platforms that showed the most notable increases in usage rates among minors in 2022 were Fortnite (+14%), Roblox (+13%), and Discord (+12%) — Thorn 2022
  • USA: 1 in 3 minors reported having an online sexual interaction. 29% of all minors reported having an online sexual interaction with someone they believed to be an adult. 1 in 5 were asked to send a nude image. 1 in 4 received sexual messages. About 25% of children aged 9–17 reported that it was common for peers their age to share nude images. When looking at teenagers specifically, we see an increase in perceived normalcy, with 32% reporting that it was common for their peers to engage in nude-sharing behaviours. Roughly 1 in 7 minors admit to having shared their own SG-CSAM. — Thorn 2022

Financial Grooming: Known as “Pig Butchering” by Criminal Enterprise

“Pig Butchering” has been a growing area of concern on social media platforms over the past couple of years. The problem has two criminal elements: a combination of human trafficking and financial grooming. Exploring this issue is also a good time to discuss trauma-informed language (socializing a change in language from “pig butchering” to “financial grooming”). These three elements form the foundation of this panel. — John Bridge, Liz Buser, Mechelle B Moore, Erin West, Lucia Stacey Harris

I had never heard the expression “pig butchering”, so it was new to me, and I winced every time it was said. There was a discussion on whether this term (common in the US) should continue to be used. They said victims don’t like using it, and frontline practitioners don’t like it either, as it’s not trauma-informed language, but many continue to use it because the press doesn’t like covering the story unless it is used. Confusingly, TikTok calls it crypto scams because, on their platform, the financial grooming has a crypto element. A speaker cited research finding that if you are aware of a scam, you are 60% less likely to fall for it. We also heard about bystander reporting and how platforms are not flexible enough, which makes it harder for them to support survivors and carry some of the administrative burden.

Stolen Nudes for Sale: Currency and Trade of Non-Consensual Intimate Images

Little is known about the bad actors distributing non-consensual intimate images online in exchange for monetary gain, and how they leverage online platform policy vulnerabilities to avoid detection. This study explored the hidden community of ‘baiters’ and buyers of non-consensual intimate images (NCII), and documents how bad actors evade tech platform policy enforcement against non-consensual intimate images. Fifteen actors were selected through a snowballing technique. Data were collected through patron interviews and undercover attempts to purchase non-consensual intimate images. Results showed that 1) baiters/sellers use social networking sites to attract potential buyers for their baits; 2) both baiters/sellers and buyers move across at least 5 different online platforms to facilitate a transaction with as little trace as possible: from selection of baits, to closing the deal, payment, and sharing of NCII with buyers. The authors put together recommendations to help strengthen tech enforcement against non-consensual intimate images. — Philip Tanpoco Jr

This session didn’t allow press, so I didn’t take any pictures, and I contacted the author to get the study. They’re submitting it to a journal, so they aren’t sharing it yet, but they will send it to me once it’s been published!

Other

I briefly attended a session which I now cannot remember no matter how many times I look through the agenda, but I picked up one new thing: there’s an AI model called GPT-4chan (now blocked by Hugging Face) which was trained on the most problematic posts on 4-C-H-A-N, so you can imagine the kind of responses it was churning out. Here’s a video from the maker about it. Despite the model’s problematic nature, GitHub has kept its source code up and not blocked it.

Apart from this, I was part of great discussions at the event, and while I have no notes on them, I just wanted to say that they were all stimulating, including a mixer organised for civil society by someone Chayn works closely with.

— —

Something else

Many thanks to TSPA for arranging a lactation room, which was roomy, comfortable and the biggest one I’ve ever seen.
