The Problem with NGOs and "Derad"
The US Department of Homeland Security spends $10-20 million each year on a grant program meant to prevent violent extremism. The problem? It doesn't.
Back in December, I broke a story about a project called Diverting Hate, which was funded by a DHS counterterrorism grant. The NGO behind the project teamed up with @X and YouTube to devise tools used to "divert" audiences away from targeted influencers (@RationalMale, @FreshandFitPod, @whatever, @pearlythingz and many more) and reroute them to state-sponsored influencers and podcasts. The goal of the tech, which mirrors Google's notorious Redirect Method, is to detect and suppress "hate speech" under the guise of terrorism prevention.
The US Department of Homeland Security Targeted Violence and Terrorism Prevention Grant awarded nearly $700,000 to Arizona State University, home of the McCain Institute, for the Diverting Hate program. One purpose of the program was to design a native tool to be used on Twitter/X to effectively suppress individuals in the manosphere by diverting their audiences away from their content towards "counter-messaging" content - websites, podcasts and creators curated by the McCain Institute to counter "hate speech and misinformation."
The Diverting Hate biannual reports contain several long articles about why manosphere-adjacent ideas are harmful and how they spread on social media. The reports then advocate for even more invasive intervention tools to be designed through collaboration between social media platforms and state-funded NGOs. These reports had been publicly accessible via a Google Drive link on the Diverting Hate website. In an apparent response to my reporting in December, Diverting Hate removed the links to the reports and made the Google Drives inaccessible to the public.
After a bit of digging, I have found their most recent report, from March 2024, in which they quantify the success of their suppression campaign and name even more targeted creators. I will provide a full breakdown of this report for paid Substack subscribers in my next post (please understand that I put hours and hours of unpaid work into my investigations! Thank you for supporting!)
Otherwise, feel free to read the Diverting Hate report for yourself here:
(EDIT - Within 30 minutes of my reporting, Diverting Hate restricted access to their report. I’ve uploaded it to my own Google Drive here:)
https://drive.google.com/file/d/1-i_NeesyWl_p7blfVLAYBdhkn05bjCHQ/view
But first, take a look at what I uncovered back in December:
The United States Counterterrorism Apparatus
The DHS Targeted Violence and Terrorism Prevention (TVTP) grant provides FEMA funding to non-governmental organizations (NGOs) in order to “establish or enhance capabilities of targeted violence and terrorism prevention” for the US government. Formerly known as the Countering Violent Extremism (CVE) grant under the Obama Administration, this program has been criticized by human rights and government accountability organizations for years due to its reliance on debunked and unscientific “risk factors” and “behavioral indicators” of radicalization. In its earlier years, critics argued that Muslim Americans were unfairly targeted and their communities further destabilized as a result of the program. They also criticized DHS’s lack of oversight and transparency and its repeated failure to demonstrate the effectiveness of the program in reducing targeted violence in the United States. Over the past several years, the threat landscape according to DHS has shifted dramatically to home in on white supremacist extremism and far-right extremism, and even more recently, to include anti-government movements and the manosphere.
Though the TVTP program has publicly shifted away from targeting people of color, human rights groups like the Brennan Center for Justice continue to lambaste the program, claiming it is no more equitable or effective today. They argue that such programs tend to expand the surveillance authority of local and federal law enforcement, actions that still disproportionately affect people of color and those in inner cities. Others have pointed out that the definitions of terrorism and radical extremism have become dangerously broad over the years. In addition, many of the programs increase the authority of federal law enforcement within individual states by building threat assessment and management teams and fusion centers, raising the concern that federal agencies may use that authority to micromanage state and local investigations.
The Biden administration empowered the McCain Institute, in partnership with the Anti-Defamation League (ADL) and the Institute for Strategic Dialogue (ISD), to build the Prevention Practitioners Network (PPN), a network that includes first responders, law enforcement, and clinicians involved in terrorism prevention. By the PPN’s own definition, targeted violence must be ideologically driven and excludes acts of interpersonal violence, street or gang-related crimes, organized crime, and financially motivated crime. However, a recent PPN guidance for building a prevention framework begins by stating, “645 mass shootings terrorized the United States in 2022, and the country is on track to exceed that total in 2023.” The PPN obtained this number of mass shootings from the Gun Violence Archive, which includes domestic shootings, gang- and drug-related shootings, interpersonal and workplace violence, and other motives for violence completely outside the scope of targeted violence by the PPN’s own definition. According to the Violence Project Mass Shooter Database, the only database that lists known motivations of shooters, only 3 mass shootings in 2022 would fall into the category of targeted violence. For comparison, the FBI reported there were 21,156 total murders in the US in 2022.
Nevertheless, President Biden declared white supremacy ‘the most dangerous terrorist threat to the American homeland’ in May 2023. Reflecting this, the McCain Institute PPN Practitioner’s Guide lays out the four overarching threat areas as white supremacy, anti-government movements, internationally-inspired terrorism, and the Manosphere. The definitions provided in this guidance are vague and overly broad. They lump a large number of law-abiding American citizens in with recognized terror threats such as ISIS and Al-Qaeda. The description for anti-government movements, for example, includes people who believe the government infringes too much on personal freedoms and liberties.
Excerpt from McCain Institute’s Preventing Targeted Violence and Terrorism: A Guide for Practitioners.
START and PIRUS
The START consortium, a DHS-affiliated organization led by the University of Maryland, is another example of how bad data can be used to influence the nature of counterterrorism programs as well as bolster political agendas. START maintains a database of individual radicalization profiles called PIRUS. START and the PIRUS database, which form much of the basis for targeting terrorist and extremist groups in the United States, have been shown to contain factually incorrect information and misleading data. For example, START classified Stephen Paddock, the alleged gunman in the Las Vegas Route 91 massacre, as an anti-government terrorist, despite the FBI officially announcing that it found no motive linked to the shooting. Inaccuracies such as these call into question the rest of the database.
https://www.start.umd.edu/data-tools/profiles-individual-radicalization-united-states-pirus
In March 2023, START announced that 995 new individual profiles had been added to the PIRUS database. Their report reads, “In 2021, nearly 90% of the offenders included in PIRUS were affiliated with the extremist far-right—the highest percentage of any year recorded in the database.” And yet they concede that the majority of these profiles came from the January 6 Capitol riot, despite the fact that none of the January 6 protesters were charged with terrorism. Nevertheless, these individuals were included in the PIRUS database of people charged with crimes relating to radical extremism. These new additions skew the data even further towards right-wing extremism and could be used to justify enhanced targeting of these groups by counterterrorism agencies in the years to come.
The Manosphere as a Terror Threat
The Manosphere, another terror threat area identified by the McCain Institute, comprises a vast and diverse group of online creators and communities. While many of these communities are certainly misogynistic to varying degrees, only one very small subset has been associated with acts of extremist or terrorist violence - the incels. Short for involuntarily celibate, the incel movement is a small, fringe group of lonely and disenfranchised men. It’s hard to determine the exact size of this group due to its decentralized and widespread nature, but the portion responsible for violence is exceedingly small compared to other widely recognized terrorist groups. Furthermore, the broader Manosphere has very little to do with the radicalization of those few incels who do escalate to violence. In fact, the majority of incels who have committed a violent act exhibited a lifetime of mental instability and violent misogyny, and most had no public association with online manosphere communities.
The conflation of incels with the broader manosphere as a justification for targeted counterterrorism operations, which sometimes involve aggressive surveillance techniques and even censorship, represents a disturbing trend. Blaming the actions of a few on the beliefs of many was the modus operandi of the United States government during its failed War on Terror. Similarly, placing the blame for incel violence on the manosphere, the blame for white supremacist violence on right-wing political views, and so on, poses a very real threat to the constitutional rights of law-abiding US citizens.
In late 2023, a trove of DHS TVTP grant applications from 2020-2022 was released as the result of a Freedom of Information Act request. These documents illustrate the expansive scope of DHS counterterrorism operations, which encompass not only NGOs but also tech giants and social media platforms that have assisted the NGOs by designing tools to suppress problematic users and their communities. These tools use machine learning platforms reliant on subjective human inputs to create databases of keywords and individual users. The databases are then shared between platforms and organizations in order to harmonize the suppression of communities targeted by the TVTP grant program.
The Redirect Method
One of the tools used by Google to combat violent extremist groups online is called the Redirect Method. The Redirect Method was a joint effort between Jigsaw, a unit within Google that explores “threats to open societies”, including disinformation, and an NGO called Moonshot. The method was pioneered in 2016 as a tool to combat ISIS extremism. It has since been vastly expanded to encompass disinformation, conspiracy theories, and various other online harms that align with the scope of the US counterterrorism program.
From the Moonshot Website:
The Redirect Method places ads in the search results and social media feeds of users who are searching for pre-identified terms that we have associated with a particular online harm.
The Redirect Method can be extensively tailored to platform requirements and campaign goals, but at its core are three fundamental components: the indicators of risk (e.g. keywords); the adverts triggered by the indicators; and the content to which users are redirected by the advertisements.
Our methodology recognizes that content not created for the purpose of counter-messaging has the capacity to undermine harmful narratives when curated, organized and targeted effectively. This approach also mitigates the risk of low retention rates experienced by bait-and-switch advertising, in which individuals are presented with content that differs significantly from that which they were searching for. Instead, the Redirect Method shows those users content which responds to and counters socially harmful narratives, arguments and beliefs espoused by the content for which they were originally searching.
To date, the Redirect Method has been deployed in partnership with tech companies, governments and grassroots organizations all over the world, in multiple languages and designed to counter a wide range of online harms.
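Stripped to its essentials, the mechanism Moonshot describes reduces to a keyword trigger plus an ad unit pointing at curated content. Below is a minimal sketch of that logic in Python; the keywords, ad copy, and URL are hypothetical placeholders, not anything from Moonshot's actual (undisclosed) implementation:

```python
# Minimal sketch of the Redirect Method's three core components:
# (1) indicators of risk (keywords), (2) ads triggered by those
# indicators, (3) counter-messaging content the ads point to.
# Every keyword, headline, and URL here is a hypothetical placeholder.

RISK_INDICATORS = {"example flagged phrase", "another flagged term"}

COUNTER_AD = {
    "headline": "Looks like what you searched for",          # bait-style title
    "target_url": "https://example.org/counter-messaging",   # curated content
}

def matches_indicator(search_query: str) -> bool:
    """Return True if the query contains any pre-identified risk keyword."""
    query = search_query.lower()
    return any(term in query for term in RISK_INDICATORS)

def serve_results(search_query: str, organic_results: list[str]) -> list[str]:
    """Prepend a redirect ad above the organic results for flagged queries."""
    if matches_indicator(search_query):
        ad = f"[AD] {COUNTER_AD['headline']} -> {COUNTER_AD['target_url']}"
        return [ad] + organic_results
    return organic_results

print(serve_results("example flagged phrase videos", ["organic result 1"]))
```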
Moonshot has partnered with a number of NGOs, funded by DHS, NIJ and other government agencies, to combat “hate and disinformation” on many major tech platforms, including Facebook, Instagram, Bing, YouTube, Google, and most recently, Twitter. Moonshot also publishes periodic Threat Bulletins to inform professionals on the trends of online domestic violent extremism. Its July 2023 Threat Bulletin references the tags “Anti-Government and Anti-Authority Extremism (AGAAVE)”, “conspiracy theories”, and “Violent dissident Republicans”.
Case Studies
Life After Hate and ExitUSA - Using Bad Data as a Foundation for Censorship
Life After Hate (LAH) is an NGO/CVE founded by a group of former white nationalists, including Christian Picciolini and Frank Meeink, the man whose life inspired the movie American History X. LAH was central to efforts to steer Homeland Security’s TVTP program towards a focus on far-right extremism. In 2020, LAH teamed up with Moonshot CVE for a DHS-funded program called ExitUSA; Moonshot, in turn, partnered with Google's Jigsaw on the project. The goal of the project was to disengage extremists, in particular far-right extremists, from their respective movements and reintegrate them into society. From an article about the program: “Asked about ‘the Trump effect,’ Picciolini said the president’s election has emboldened the white supremacist movement. Calls to ExitUSA, a program through Life After Hate, have gone up from two or three per week before the election to 15-20 per week, he said."
The grant for the ExitUSA program is more heavily redacted than most others and claims FOIA exemption b(6) liberally throughout. The target population of the study is vague: individuals with risk factors for violent white supremacist extremist (WSE) targeted violence and terrorism. The use of risk factors and behavioral indicators for targeting individuals has been debunked and decried by critics of the TVTP program since its inception under President Obama. The objective of the messaging campaign was to use Moonshot's database of 26,000 unique indicators of far-right extremism to intercept users who were making certain Google searches and, through hyper-targeted interventions, redirect those people away from the ranked search results and onto alternative pathways.
There are several other programs using this same method. Essentially how it works is this: If you are in the target population (as determined via the information in your Google user account) and you search something like “White Power”, you will get targeted ads that pop up at the top of the results. These ads are clickbait. The titles/thumbnails look like a video about white power, but when you click on them you are redirected to a counter-messaging program. So instead of a pro-white power video, you see a video where a former extremist tries to talk you out of being racist. There is very little evidence that these programs work. From the outside looking in, it seems unlikely that a white supremacist will be tricked into watching a 4-minute video and come away a changed person. There also isn't any good way to gauge whether the videos move people away from extremism; the programs rely solely on metrics like view time and click-through rate as indicators of success.
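Since view time and click-through rate are the only yardsticks, the "success" of a campaign boils down to arithmetic like the following. The numbers here are invented purely for illustration:

```python
# Hypothetical engagement metrics of the kind these programs report.
impressions = 100_000          # times the clickbait ad was shown
clicks = 2_500                 # users who clicked through to the counter-video
total_watch_seconds = 75_000   # cumulative watch time on the counter-video

click_through_rate = clicks / impressions     # 0.025, i.e. 2.5%
avg_view_time = total_watch_seconds / clicks  # 30 seconds per viewer

print(f"CTR: {click_through_rate:.1%}, avg view time: {avg_view_time:.0f}s")
# Note: nothing in these figures measures whether anyone's views changed.
```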
A good example of this is the ICSVE YouTube page. ICSVE is a CVE run by Anne Speckhard. On its YouTube page, you can see examples of clickbait videos and thumbnails which, when presented to users as an ad on Google, make them think they are about to watch a racist video. Instead, the videos contain testimonies from “former” extremists attempting to provide counter-arguments against the targeted ideology. The use of such “formers” is a practice that has also received much criticism, based on past cases in which these individuals continued to promote their former ideologies and/or continued to manage their extremist organizations behind the scenes, all while receiving paychecks from state-funded anti-hate NGOs.
The ICSVE’s IRS Form 990 indicates that Anne Speckhard earned a salary of $387,858 in 2021.
ICSVE 2021 Form 990
According to the LAH grant, Moonshot piloted their revolutionary new program internationally in 2015, targeting people interested in “far right slogans, merchandise, music, videos, and conspiracy theories and connected them with counter and alternative content”. LAH also states it will provide state heat maps as an output of the program. This tracks with what Moonshot did during their pilot program, as outlined in this RAND Corporation report. Moonshot identified the ten states whose counties had the highest number of extremist searches per capita, then focused a month-long campaign on each of those states. What isn't disclosed is how Google obtained this information. Whereas other demographics, such as gender and age, are noted as being derived from a user's Google account, we are not told how Moonshot obtained the county-level location of the searches.
The RAND document also acknowledges the challenges of measuring the effectiveness of such programs. For that, DARPA stepped in with a new method for analyzing YouTube comments on the redirect videos: identifying the commenter's account, then tracing back other comments made by that user to determine whether their comments became more or less extreme afterwards. One can safely assume that DARPA uses this comment-tracking technology for other purposes as well.
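As described, the DARPA method amounts to linking a commenter's account to their wider comment history and scoring the text on either side of the exposure. Here is a minimal sketch of that before/after comparison; the scoring function is a toy stand-in, since the actual classifier behind the method is not disclosed:

```python
from statistics import mean

# Hypothetical flagged phrases; the real scoring model is undisclosed.
FLAGGED_PHRASES = ["hypothetical slogan a", "hypothetical slogan b"]

def extremity_score(text: str) -> float:
    """Toy stand-in scorer: count flagged phrases appearing in a comment."""
    lowered = text.lower()
    return float(sum(phrase in lowered for phrase in FLAGGED_PHRASES))

def shift_after_exposure(comments: list[dict], exposure_time: float) -> float:
    """Compare a user's average extremity before vs. after the time they
    commented on a redirect video. Each comment: {'time': ..., 'text': ...}."""
    before = [extremity_score(c["text"]) for c in comments if c["time"] < exposure_time]
    after = [extremity_score(c["text"]) for c in comments if c["time"] >= exposure_time]
    if not before or not after:
        return 0.0  # not enough history on one side to compare
    return mean(after) - mean(before)  # negative => less extreme afterwards
```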
Finally, the RAND document highlights the impact of the $750,000 DHS-funded ExitUSA campaign, whose stated goal was “To discredit far-right extremists and ‘sow seeds of doubt’ in their members.” The result? Eight (8) users reached out to ExitUSA to discuss deradicalization.
Life After Hate also used Google's Jigsaw for another campaign, called WeCounterHate, in an effort to reduce hate speech on Twitter. The program replied to tweets containing AI-detected hate speech with a tweet explaining that for every retweet the original generated, the organization would make a donation to Life After Hate. They claim the program resulted in 1 out of 5 countered hate tweets being deleted by their authors and use this to make several hazy and unreliable projections regarding the program's efficacy. For example, the program claims that 4 million fewer people were exposed to hate speech, that 20 million potential impressions of hate speech were prevented, and even, without further clarification, that the program achieved a “55% reduction of hate”. While the idea may have good intentions, the machine learning platform employed by the model gives cause for concern.
Screencap from promotional video.
WeCounterHate combined three technologies into its new AI platform: IBM Watson, Google Jigsaw, and Spredfast. The group claimed the platform was capable of classifying and rating the toxicity of hate speech on Twitter, and it used former extremists to train the AI to look for hidden forms of hate. Again, these programs consistently decline to reveal the words in their hate speech databases; however, the program did give several examples of “hidden” hate speech and their meanings, including the terms “Pitbull – Used to describe black people as super-predators”, “The Juice – A play on The Jews”, and vague strings of emojis which WeCounterHate claims represent phrases like “Hail Hitler”, “The Race War is coming”, and "Kill".
While WeCounterHate claimed that each flagged tweet was reviewed by a human moderator prior to being selected, the use of such a tool gives insight into how vaguely these programs define hate speech. For example, the term “cuck” is known to most people as short for cuckold; WeCounterHate, however, redefined the word to mean “Inadequate man due to lack of racist and extremist views.” These incorrectly defined terms are fed into a machine learning platform, powered by Moonshot, which can then be used by other tech companies, NGOs, and even the government to target individuals online.
Screencap from promotional video.
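The pipeline the group describes, a dictionary of coded terms feeding a toxicity rating with a human review gate, can be sketched as follows. The dictionary entries are the examples WeCounterHate itself published; the scoring function and threshold are hypothetical stand-ins for the undisclosed Watson/Jigsaw/Spredfast internals:

```python
# Sketch of a WeCounterHate-style pipeline: a "hidden hate" dictionary
# feeds a toxicity score, and flagged tweets go to a human moderator.
# Term meanings below are the program's own published examples; the
# scorer and threshold are hypothetical (the real internals were never
# disclosed).

HIDDEN_TERMS = {
    "pitbull": "used to describe black people as super-predators",
    "the juice": "a play on 'The Jews'",
    "cuck": "inadequate man due to lack of racist and extremist views",  # (sic)
}

def rate_toxicity(tweet: str) -> float:
    """Hypothetical score in [0, 1]: fraction of dictionary terms present."""
    text = tweet.lower()
    hits = sum(term in text for term in HIDDEN_TERMS)
    return hits / len(HIDDEN_TERMS)

def flag_for_review(tweet: str, threshold: float = 0.3) -> bool:
    """Queue the tweet for human moderation if the score crosses the threshold."""
    return rate_toxicity(tweet) >= threshold
```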
Ironically, the promotional video on the WeCounterHate website includes the full, hateful tweets, uncensored, to demonstrate the effectiveness of their program in reducing the public’s exposure to such content (see screencap below, which I have chosen to censor). These missteps reinforce the danger of using “former” extremists in CVE programs. Not only did the former extremists train their AI using dog whistles which were vague, incorrect, and possibly non-existent, but they used their website to disseminate the very same hate speech that they were given hundreds of thousands of dollars to eliminate from the web.
Screencap from promotional video.
Diverting Hate Program Targets the Manosphere
Diverting Hate is an NGO conducting TVTP work at the Middlebury Institute of International Studies. The program began as a part of the McCain Institute Invent2Prevent program, a competition among high schools and universities to develop tools and programs in line with DHS’s TVTP goals. Top-ranking competitors may go on to expand their programs by applying for the DHS TVTP grant. The first-place collegiate winner in 2021 was a team from the Middlebury Institute that created a program called “Diverting Hate”, in which populations deemed at-risk for violence are located online, intercepted, and redirected to counter-messaging platforms. The team relied heavily on previous uses of the Redirect Method and on Moonshot’s databases of extremist-related indicators. The at-risk population at the focus of this program was the incel community.
The team was led by Jason Blazakis, Director of the Center on Terrorism, Extremism, and Counterterrorism (CTEC). Blazakis worked for the US Department of State until 2018, where his role was to designate countries, organizations, and individuals as terrorists. He is listed as a Democratic Party candidate for the 2024 NJ Congressional election. Notably, Blazakis is a professor at the University of Maryland and has been involved in several START programs and conferences; he is currently teaching a course in terrorism studies for the START 2023 winter program.
In a recent Washington Post op-ed, Blazakis makes his motivations for influencing counterterrorism policy very clear. He argues for:
1) expanding the funding and authority of the FBI to work on domestic extremism cases;
2) updating the US domestic terrorism statute to “pave the way for longer prison sentences and provide clearer pathways to prosecution of accomplices” while specifically referencing “Jan. 6 insurrectionists”;
3) US government intervention in the policies of social platforms;
4) encouraging and funding NGOs to identify at-risk populations and individuals, noting that federal authorities “are not viewed as honest brokers in this fight.”
On point 4, it’s easy to see how this could be interpreted as an argument for discreetly expanding the surveillance, monitoring, and data collection capabilities of the US government by routing said services through NGOs, who then prepare tools and reports for government agencies. This workaround provides plausible deniability for government agencies looking to circumvent constitutional protections but does little to protect US citizens from aggressive surveillance.
Invent2Prevent Collegiate Finals
The Diverting Hate program, which was an entry for the DHS/McCain Institute Invent2Prevent competition, had two primary objectives. The first objective was to develop “alternative pathways” for incels on Twitter, such that they could be intercepted and diverted towards curated counter-messaging platforms – creators, podcasts, and websites designed to have the appearance of the same general content, but which oppose the “extremist” viewpoint, “debunk” misinformation, etc. To accomplish this, the team first focused on researching incels, conducting Social Network Analyses, and collecting data on Twitter. They then used paid advertisements on Twitter to target these audiences with a specific goal of diverting them away from “dangerous paths” to their hand-selected counter-messaging collaborators.
The second objective was to develop a native tool for Twitter that flags a tweet as sensitive content or as potentially violating the rules of the platform. The viewer is then given the option to click an alternative link leading to a counter-messaging platform. The researchers used Shadaya Knight, a prominent Twitter personality in the Manosphere, to illustrate the process. The tweet in their example, which they flagged as extremist content, stated, “The sheep think the red pill is about women. It’s much more than that, women are a mere fraction. It’s about unlearning everything - politics, religion, economics, sciences, entertainment. True red pillars, see the world for what it really is.” Nothing in Shadaya’s tweet violated Twitter’s TOS; in fact, it’s hard to see how anyone would consider this statement an example of extremism.
Stills from the Invent2Prevent Collegiate Finals YouTube video
For the Invent2Prevent collegiate program, Diverting Hate only sought to test the feasibility of what they intended eventually to become a native Twitter tool. By using Twitter ads and targeting incel-related keywords, they successfully diverted over 6,000 users away from incel and incel-adjacent spaces to their website.
During the Invent2Prevent Collegiate Finals in the Fall of 2021, the team presented their program to a panel of judges representing a variety of national security interests. Diverting Hate team member Myles Flores, a Graduate Researcher with the CTEC, acknowledged that the team developed their database of incel-related keywords primarily using Incel Wiki, a site owned and operated by Lamarcus Small, the owner of the most extreme incel forum. The Incel Wiki is user-generated, meaning that all one needs to do to be granted editing permissions on the site is join their Discord server. There are strict rules on how the group may be portrayed. The rules also state that articles about “memes or obscure-theories do not need to be descriptive or contain citations… Whether a meme or obscure theory article stays is not depended on its accuracy but rather on its entertainment value and the quality of the content.” In other words, the entries on Incel Wiki can simply be made up for entertainment purposes by a user with posting permissions.
Former Incel Wiki administrator William Lupinacci stated that he left the group because Small would not allow him to move the group in a direction away from “The Blackpill” and would reject content that disputed his own ideological preference. Lupinacci also claimed that a sinister pedophile named Nathan Larson was a frequent contributor to the Wiki. Small refused to ban Larson from his sites, despite his perverse contributions, which promoted pedophilia as a solution to inceldom. Larson himself was not an incel; rather, he hated incels and sought to stigmatize and even sterilize them for his own sick agenda. Larson was accused of using one of his incel-adjacent websites to groom and kidnap a 12-year-old girl. He was arrested and later died in prison while awaiting trial.
Despite these factors, the Diverting Hate team relied heavily on Incel Wiki not only to develop their list of “dangerous words”, but also to frame their understanding of those words and of terms commonly associated with incels. This represents a serious flaw in the researchers’ methods and raises concerns that the keywords used to target creators in the manosphere came from a faulty dataset. For example, Judge Erin Saltman of the Global Internet Forum to Counter Terrorism (GIFCT) asked team members to explain how the 1-5 “violation scale” was used to perform keyword analysis, to which Kaitlyn Tierney responded that the word “blackpill” was very dangerous and ranked a 5. She stated that such “very dangerous” keywords may require quarantining and diversion by Twitter. Their reliance on the Incel Wiki to obtain the keywords and ascertain their meanings, however, calls into question their authority on this subject.
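Based on the team's answers to the judges, the keyword analysis reduces to a severity lookup of the kind sketched below. Only the word "blackpill" and its rank of 5 come from the team's own testimony; every other entry and the action thresholds are hypothetical, since the full keyword list was never published:

```python
# Sketch of the 1-5 "violation scale" as described to the judges.
# Only "blackpill" -> 5 is from the team's testimony; the other entries
# and the threshold-to-action mapping are hypothetical.

VIOLATION_SCALE = {
    "blackpill": 5,            # rated "very dangerous" by the team
    "hypothetical term a": 3,
    "hypothetical term b": 1,
}

def score_tweet(text: str) -> int:
    """Return the highest violation rank of any keyword found in the tweet."""
    lowered = text.lower()
    return max((rank for term, rank in VIOLATION_SCALE.items() if term in lowered),
               default=0)

def platform_action(text: str) -> str:
    """Map the score to the interventions described: quarantine/divert at 5."""
    score = score_tweet(text)
    if score >= 5:
        return "quarantine tweet and divert audience"
    if score >= 3:
        return "flag as sensitive"
    return "no action"
```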
During the judge interviews, Saltman said, “I will openly admit a little bit of bias, because I actually… when I worked at Facebook helped launch some of their Redirect initiatives in about 5 different countries and know Moonshot very well.” She tells the contestants that they are the first to launch the program on Twitter, and that similar programs exist on “Facebook, Instagram, Bing, YouTube, Google.” She also says this is the first time such a program has been used to combat misogynist-based violence, and that previous iterations focused on white supremacy, Q-Anon and Islam-based extremism. She then asked about the scalability of the program, to which team member Peter Stewart responded that the team was especially excited about expanding the program to encompass the greater manosphere. Their intent was to team up with Twitter to implement their tool and then move it to other platforms. He stated that the group intended to build databases for other “extremist groups” and expand the program to target the broader manosphere.
DHS TVTP Grant Award
After winning first place in the Invent2Prevent competition, Diverting Hate went on to apply for the 2022 DHS TVTP grant and was awarded $659,327 to continue developing its diversion tool. The funding went directly to Arizona State University, home of the McCain Institute. Whereas the pilot program reached an audience of just over 6,000 Twitter users, the 2022 grant put the new goal at 700,000 users. Brette Steele, the Senior Director for Preventing Targeted Violence at the McCain Institute, was listed as the Principal Investigator on the grant.
Diverting Hate’s DHS TVTP grant application requested $659,327 from the federal government for their proposed project, which began 10/01/2022 and is projected to end on 09/30/2024.
From their abstract: Violent misogyny is amplified by social media algorithms designed for engagement. This results in radicalization of men and violence against women. Men must be diverted away from these narratives and toward protective factors to prevent gender-based violence.
The program is run by a team of students at the Middlebury Institute of International Studies, in collaboration with Arizona State University. The grant application outlines two objectives of the program:
Develop and contribute to the theoretical understanding of Incel ideology by conducting and sharing the results of in-depth research, practical analyses, and marketing tests to practitioners, researchers, academics, and the broader preventing targeted violence and terrorism community.
Disrupt Incel radicalization on-ramps within Twitter by surfacing alternative pathways to community group partners using targeted ads via user behavior, key terms, and network analysis.
Objective 1 is quite vague and appears to be focused on data collection and possibly controlled studies/surveys directed at the incel community. Diverting Hate defines Objective 2 as “an evidence-based program that addresses the online/digital spaces through our strategic targeting to divert men away from dangerous paths within the Twitter ecosystem and towards protective factors.” In the broader context of the program, it appears the term “dangerous” is meant to refer to creators in the Manosphere.
The Needs Assessment section of the grant application states that other DHS-funded organizations such as CCDH, Life After Hate, and Jigsaw/Moonshot have all worked to influence social media policies in this way, and that many of those methods relied on censorship.
The Diverting Hate team explicitly states that they wish to use US counterterrorism funding in the amount of $659,327 to intercept and redirect 700,000 Twitter users away from the manosphere. They deceptively frame this goal as a method for preventing the radicalization of men towards incel ideology; however, by targeting the manosphere, they are expanding the tool beyond incels to a much broader segment of American society, the vast majority of which is not at risk for violent extremism.
In 2023, the team released two biannual reports outlining Diverting Hate’s progress. Diverting Hate’s reports specifically call out creators Rollo Tomassi and Andrew Tate as targets for their program, stating the goal to “divert people (incel and incel-adjacent) away from misogynistic content online.”
The reports go on to name a number of large and small creators within the manosphere, arguing that these creators harm their communities and lead men down a dangerous radicalization pipeline toward violent misogynistic extremism. There is an entire section devoted to Pearl Davis, a female creator who is heavily criticized and relatively non-influential in the space. The report also has a section criticizing Elon Musk’s moderation of X and his reinstatement of “bad actors”, including Andrew Tate, Donald Trump, Kanye West, and the Babylon Bee.
Several of the named individuals exist completely outside the manosphere. For example, Sameera Khan is an influencer who focuses on geopolitics; she was included in the report because she had once tweeted in support of something Andrew Tate said. Others have small communities with good intentions. Honey Badger Radio, for example, is a manosphere-adjacent podcast that also curates a men’s support community aimed at inspiring men to achieve their goals. In this case, Karen Straughan of HBR was criticized for “platforming hate” due to her interview with a prominent manosphere figure.
Definition of “divert” from the group’s website.
Despite basing their entire program on the assumption that the manosphere is the source of incel radicalization to violence, only one section of the report discusses a case study of US-based misogynistic violence at length: that of Mauricio Garcia, perpetrator of the May 2023 mass shooting in Allen, Texas. In its discussion of Garcia, the report does not acknowledge his obsession with and sympathy for mass shooters, combined with his callous disregard for their victims. Garcia copied and pasted posts from the aforementioned incel forum into his online journal. In particular, he copied a post authored by one of the main staff members of that forum which mocked the parents of the Parkland school shooting victims and attempted to justify the actions of Nikolas Cruz. And yet the author baselessly attributed Garcia’s radicalization to the Manosphere without providing any direct examples of individuals in the space who may have played a role.
The biannual report from April 2023 contains scathing criticisms of Elon Musk, alleging that Musk’s mass layoffs restricted moderation tools, leaving the platform unable to discipline accounts that violate the Hateful Conduct and Misinformation Policy. An adjacent section titled “Misogynistic Language is Bountiful on Twitter” lists several keywords that the team identified as some of the more concerning language within the Twitter manosphere. These terms include the 666 rule, The Wall, and Hypergamy. While it is possible that Elon Musk's acquisition of Twitter threw a wrench in the Diverting Hate campaign, the fact that DHS paid an NGO nearly $700,000 for this work is alarming enough to warrant a direct response from Musk on the subject.
The report also makes many subjective and nonfactual claims about the incel movement in general, and references a biased study of the world’s largest incel forum co-authored by the forum’s owner under the pseudonym Alexander Ash. As one might expect, the study whitewashes the harsh reality of the forum and attempts to portray its members as lonely, misunderstood men who are mostly non-violent. The 2021 study, authored by Anne Speckhard of the NGO ICSVE, was published in Homeland Security Today magazine but was removed from the site following major backlash accompanying the release of a New York Times exposé on the forum and its sadistic owners. The use of this study as a primary source could impart further bias to the keyword targeting implemented by the group.
The second part of the report, published in September 2023, is much broader in scope and makes a number of unsupported claims and associations. Notably, on page 4 of the report, there is a diagram containing screencaps of the Twitter accounts of specific individuals within the manosphere, along with an arrow demonstrating the diversion of Twitter users away from that group and towards podcasts and influencers curated by the program.
From the Diverting Hate September Bi-Annual Report
The Diverting Hate Counter-Messaging Partners
The Diverting Hate team claims they divert would-be incels to “resource hubs” where they can access mental health resources and support groups; however, many of their counter-messaging collaborators are for-profit companies whose websites advertise books, programs, and even “training camps” costing hundreds of dollars. For example, the organization Man Enough, operated by Hollywood actor Justin Baldoni, serves as a way to sell his books and seminars. Similarly, The ManKind Project sells “New Warrior Training Adventures” that cost nearly $1,000 to attend. Another collaborator, Men Alive, run by a marriage counselor named Jed Diamond, also sells pricey relationship therapy programs and “Healing Services”. It is difficult to understand how any of these resources would be useful to incels - dejected and angry young men who tend to be low-income.
Who is Really Radicalizing the Incels?
Contrary to the radicalization pathways proposed by the Diverting Hate team, the main funnel from Twitter to the world’s largest incel forum, incels.is, is far more likely to originate at @IncelsCo, the Twitter account operated by the site owner, Lamarcus Small. The URL to the website is included in IncelsCo’s Twitter name and in his banner picture. Small’s IncelsCo Twitter account boasts 14.3K followers. In addition, Small runs an “Incel Talk” community on Twitter with 1.5K members. Lamarcus Small, whose username on his forum is “Master”, has publicly stated that he is engaging with specific communities on Twitter in order to build his following there. His goal, obviously, is to drive more traffic to his website.
Incels.is has been largely deranked from Google and other search engines over the past year, owing in large part to country-imposed bans on Sanctioned Suicide and related sites. A CCDH study showed that YouTube had the highest crossover with incels.is, with 14,226 links to its content in the analyzed dataset, compared to 1,149 for Twitter.
The CCDH report also lists the top 10 YouTube channels with crossover to incels.is, most of which are incel specific channels, and none of which have a direct link to the broader manosphere.
Incel forum posts pertaining to the @IncelsCo website.
Given this information, it is surprising that the Diverting Hate researchers make no mention of this Twitter account or of the methods Lamarcus Small employs to advertise his websites on social media. By quote-tweeting and commenting on larger manosphere accounts and engaging in discourse relevant to that community, Small has successfully funneled manosphere audiences to his page. And despite the manosphere accounts not reciprocating in this exchange, the Diverting Hate team has enacted punitive measures against them, blaming them for sitting at the beginning of the radicalization pipeline. In this way, they have handed Lamarcus Small a “poison dagger” – any creator he chooses to engage with repeatedly may be penalized, regardless of whether that relationship is mutual.
Rollo Tomassi, who operates the @RationalMale Twitter account, has engaged with @IncelsCo on only 5 occasions in the past 4 years. Of those 5 engagements, only one was organic engagement initiated by Tomassi, and it occurred on 11/20/2023, well after the Diverting Hate program wrapped up. At the time the Diverting Hate campaign took place, Tomassi had only one engagement with @IncelsCo on Twitter. None of the other large creators mentioned in the report have any existing interactions with @IncelsCo, aside from a few one-way interactions in which @IncelsCo retweeted and commented on their posts with no reciprocity.
Adding another layer of irony, several large accounts in the NGO researcher space engage with @IncelsCo on a regular basis. These accounts have far more interactions with Small’s account than any of the manosphere creators targeted by Diverting Hate’s program. For example, Alexander (@datepsych on Twitter), a manosphere-adjacent account that frequently engages back and forth with @IncelsCo, is part of a network of individuals promoting the interests of NGOs performing studies on incels. The NGOs, which are connected to Swansea University and the University of Texas at Austin, obtain the majority of their study participants from the incels.is forum. @datepsych engaged reciprocally with @IncelsCo on Twitter 9 times in a one-month span (November 2023). Unlike the manosphere creators targeted by Diverting Hate, not only has @datepsych engaged in significant discourse with @IncelsCo, but some of that engagement has been positive rather than oppositional. He has also participated in Twitter Spaces with the incel community.
Alexander @DatePsych frequently promotes the research of William Costello and Buss Labs, and was part of an “affiliate marketing” strategy the researchers used to increase study participation by asking influencers in the manosphere space to promote them.
https://twitter.com/datepsych/status/1643579951147372547
The study promoted in this tweet by @datepsych is a Swansea-Texas study which received funding from a counterterrorism agency in the UK.
Chris Williamson, host of the Modern Wisdom Podcast, boasts 1.54M subscribers on YouTube and was also used to recruit participants for the study, yielding over 60K impressions on Twitter.
The lead researcher involved in the Swansea study, William Costello (@CostelloWilliam) also has a large number of 2-way engagements with @IncelsCo on Twitter, and all of these interactions are friendly. Andrew Thomas (@DrThomasAG), the other researcher in charge of this study, made a (now deleted) post thanking @IncelsCo after study recruitment was complete.
@notsoerudite, a female content creator with 62K subscribers between Twitch and YouTube, has also centered a large portion of her content around incels and promoted William Costello and his research when doing so. She has shared the results of these studies on large panels, which has enabled her to reach an extended audience not ordinarily associated with the manosphere. The arguments made by @notsoerudite, based around William Costello’s research, reflect those made by Jesse Morton, who was a close colleague of Costello prior to Morton's death. She argues that incels are simply misunderstood and unproblematic, and that they represent a precarious social group that is marginalized and left behind.
Once again, this is in stark contrast to other manosphere creators cited by the Diverting Hate researchers. Rollo Tomassi (@RationalMale), an influencer who does not directly engage with @IncelsCo, has consistently used the word incel as an insult and attempted to dissuade men from falling victim to this mentality. Given this information, it appears that the manosphere and incel-adjacent Twitter accounts most responsible for sending incels down the radicalization pipeline, which leads to Lamarcus Small’s incel forum, are those involved in state-funded NGO research.
Diverting Hate Retracts Public Access to Reports
On December 8, 2023, a Twitter post about Diverting Hate’s program got several hundred retweets and a journalist attempted to reach out to them for comment. Diverting Hate did not respond to the journalist's request. The next day, Diverting Hate removed the “Research and News” and “Partners” sections from their website, divertinghate.org, and restricted access to their reports, which were previously available to the public via a Google Drive link. The reports were downloaded prior to being removed and can be accessed here:
https://drive.google.com/drive/folders/1svDO5K6BBb1j__DScOfHv0-FAXzVHuG2