
Weekly Axis Of Easy #393
Last Week’s Quote was: “I am so clever that sometimes I don’t understand a single word of what I am saying,” by Oscar Wilde. Ross got it right!
This Week’s Quote: “We are always getting what we believe, but not always what we want.” By ???
THE RULES: No searching up the answer; answers must be posted in the comments section at the bottom of the blog post.
The Prize: First person to post the correct answer gets their next domain or hosting renewal on us.
This is your easyDNS #AxisOfEasy Briefing for the week of March 24th, 2025, in which our Technology Correspondent Joann L Barnes and easyCEO Mark E. Jeftovic send out a short briefing on the state of the ’net and how it affects your business, security and privacy.
To listen to or watch this podcast edition, with commentary and insight from Joey and Len the Legend, click here.
In this issue:
- Leaked Docs Tie USAID, GEC, NewsGuard, and Poynter to AI-Driven Speech Censorship Network
- Italy Orders Google to Block Pirate Streams via DNS Poisoning in Landmark Piracy Shield Ruling
- UK’s Online Safety Act Forces Forum Shutdowns, Spurs Digital Isolation Fears
- China’s AI Censorship Engine: Leaked Dataset Reveals How LLMs Are Policing Dissent
- Canada Pressures Social Media Platforms Ahead of 2025 Federal Election
- Former CRTC Vice-Chair: Canadian media have a direct financial stake in the election outcome (does it show?)
- Are We Already Living in A Post-Singularity World?
Elsewhere Online:
Leaked Docs Tie USAID, GEC, NewsGuard, and Poynter to AI-Driven Speech Censorship Network
America First Legal (AFL) released documents from its litigation against the State Department’s Global Engagement Center (GEC), exposing a government-backed censorship network spanning USAID, the British Foreign, Commonwealth & Development Office (FCDO), NewsGuard, Poynter, and Park Advisors. Though GEC’s stated purpose was to combat foreign disinformation, internal communications reveal it collaborated with USAID to counter “COVID-19 misinformation” and political narratives, including Moldova’s 2020 election. USAID branches involved included TF 2020-COVID 19, Digital Development, Asia Bureau ES Taskers, Asia Outreach, CPS Policy, and CPS Africa.
The documents show that Park Advisors, run by Obama-era State Department alum Christina Nemr, received over $6 million in GEC funds and funneled subawards to NewsGuard, the Atlantic Council, and the Soros-funded Global Disinformation Index. These groups, unencumbered by GEC’s international restrictions, helped build “Disinfo Cloud,” used by U.S., UK, EU, Australian, and Estonian governments. NewsGuard’s Matt Skibinski pitched services days after the 2020 U.S. election and shared samples of the AI-driven “Misinformation Fingerprints” pilot with U.S. Cyber Command. GEC also shared internal evaluation tools with Poynter, which AFL links to a coordinated global fact-checking network. On January 8, 2021, alleged “malinformation” from the U.S. State Department was shared with UK officials. GEC shut down in December 2024.
Italy Orders Google to Block Pirate Streams via DNS Poisoning in Landmark Piracy Shield Ruling
Italy’s AGCOM, the national communications regulator, scored a legal win against Google, with the Court of Milan ordering the company to “poison” its public DNS servers to block domains linked to illegal Serie A football streaming. This ruling, issued inaudita altera parte—without hearing Google’s side—was prompted by Google’s repeated failure to comply with Piracy Shield, a law mandating site blocks within 30 minutes of AGCOM identification. Since Google’s public DNS facilitates access to pirate sites, the court held it liable under the law. The technique, DNS poisoning, rewrites records to misdirect users—a legally sanctioned version of what’s usually a cyberattack tactic.
AGCOM Commissioner Massimiliano Capitanio praised the decision on LinkedIn, framing it as validation of AGCOM’s uniquely aggressive copyright enforcement system. Capitanio criticized Google’s history of ignoring block lists and emphasized the court’s endorsement of AGCOM’s investigative legitimacy. This follows a January ruling against Cloudflare, whose CDN, DNS, and WARP VPN were also found to aid piracy; the firm faced €10,000-per-day fines for non-compliance. Italian ISPs, too, have been targets—last year, they accidentally blocked all of Google Drive due to Piracy Shield’s domain-wide filtering.
TorrentFreak, which spotted the ruling, has reported extensively on these enforcement trends. Google has been contacted for comment; no response had been received at the time of writing.
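To make the mechanism concrete, here is a minimal sketch of DNS-level blocking of the kind the ruling mandates: a resolver consults a regulator-supplied blocklist before answering, returning a sinkhole address for blocked domains. All domain names and IP addresses below are hypothetical illustrations, not how Google's public DNS actually implements compliance.

```python
# Sketch of blocklist-aware DNS resolution ("poisoning" in the legal sense):
# blocked domains get a non-routable answer instead of their real A record.
# Domains and addresses here are made up for illustration.

BLOCKLIST = {"pirate-stream.example"}   # domains the regulator orders blocked
SINKHOLE = "0.0.0.0"                    # bogus answer returned instead

REAL_RECORDS = {
    "pirate-stream.example": "203.0.113.10",
    "legit-site.example": "198.51.100.7",
}

def resolve(domain: str) -> str:
    """Return the A record for a domain, unless it is on the blocklist."""
    if domain in BLOCKLIST:
        return SINKHOLE          # user is misdirected; the stream never loads
    return REAL_RECORDS.get(domain, "NXDOMAIN")

print(resolve("pirate-stream.example"))  # blocked -> 0.0.0.0
print(resolve("legit-site.example"))     # unaffected -> 198.51.100.7
```

The same rewrite-the-answer technique, done without legal sanction, is what security people normally call DNS cache poisoning.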
UK’s Online Safety Act Forces Forum Shutdowns, Spurs Digital Isolation Fears
The UK’s Online Safety Act, in force since March 17, 2025, mandates that all online platforms—regardless of size—implement immediate measures to protect users from illegal content, with enforcement led by Ofcom. Platforms must submit written risk assessments detailing the likelihood and severity of harms like coercive behavior or cyberstalking by March 31. Noncompliance can trigger fines up to £18 million or 10% of turnover, or UK site blocks. The law’s scope includes even “small but risky services,” causing closures of niche, long-standing forums—like LFGSS, The Hamster Forum, and Dads With Kids Forum—which cannot meet compliance demands. Developer Dee Kitchen deleted all 300 Microcosm-hosted communities on March 16. Foreign-hosted sites like Lemmy.zip (Finland) are now blocking UK users to avoid exposure.
Critics including Lord Daniel Moylan and Professor Andrew Tettenborn (Free Speech Union adviser) warn the law isolates UK users and drives them to dodgy offshore sites via VPNs. Ofcom insists it will act proportionately but prioritizes high-risk platforms. Comparisons have been drawn with China’s internet firewall. The Epoch Times reports extensively, quoting stakeholders. Norman Lewis, of MCC Brussels and formerly PwC and Orange UK, warns the Act surpasses the EU’s Digital Services Act and threatens platforms lacking massive ad revenue. Moylan quips that noncompliant forums may now be reduced to posting their updates on church porches.
Read: https://www.zerohedge.com/political/british-chat-forums-shutter-avoid-new-internet-policing-law
China’s AI Censorship Engine: Leaked Dataset Reveals How LLMs Are Policing Dissent
China is using large language models to scale censorship. A leaked dataset—133,000 entries, discovered by security researcher NetAskari on an unsecured Elasticsearch database hosted on a Baidu server—reveals a state-aligned system that flags politically sensitive online content. The database, seen by TechCrunch and last updated in December 2024, includes flagged posts about rural poverty, corrupt Communist Party officials, and local police shaking down entrepreneurs. One item criticizes depopulated rural towns; another describes a CCP member expelled for corruption and belief in “superstitions.” Content referencing Taiwan (appearing 15,000+ times), military movements, political satire, and even veiled criticism—like a Chinese idiom about fallen power—is prioritized for suppression.
The LLM, prompted in natural language, evaluates whether content involves sensitive topics—politics, society, military—and flags it for “public opinion work,” a euphemism for censorship and propaganda, overseen by the Cyberspace Administration of China. Experts like Xiao Qiang (UC Berkeley) and Michael Caster (Article 19) say the system is optimized to detect both overt and subtle dissent. This reflects broader trends: OpenAI recently reported Chinese actors using generative AI to track protest-related posts and discredit dissidents like Cai Xia. While traditional censorship blocks keywords like “Tiananmen massacre,” AI enables repression with greater speed, scale, and nuance—turning the internet into a front line of authoritarian control.
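The pipeline described above can be sketched as follows: content is wrapped in a natural-language prompt asking a model whether it touches sensitive topics, and flagged items are routed to “public opinion work.” The prompt wording, topic list, and function names here are illustrative guesses based on the reporting, not the leaked system’s actual text or code.

```python
# Hedged sketch of an LLM-as-censor pipeline of the kind the leaked
# dataset implies. The categories mirror those named in the report
# (politics, society, military); everything else is hypothetical.

SENSITIVE_TOPICS = ["politics", "society", "military"]

def build_flagging_prompt(post: str) -> str:
    """Wrap a social-media post in a natural-language classification prompt."""
    return (
        "Decide whether the following post involves any of these "
        f"sensitive topics: {', '.join(SENSITIVE_TOPICS)}. "
        "If it does, mark it for 'public opinion work'.\n\n"
        f"Post: {post}"
    )

# The prompt would then be sent to an LLM; a stub stands in for that call.
prompt = build_flagging_prompt("A post about local officials and rural poverty")
print(prompt)
```

The point of the sketch is the shift it illustrates: instead of matching fixed keywords, the censor delegates judgment to a model that can recognize veiled or idiomatic dissent.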
Read: https://techcrunch.com/2025/03/26/leaked-data-exposes-a-chinese-ai-censorship-machine/
Canada Pressures Social Media Platforms Ahead of 2025 Federal Election
As Canada heads toward its April 28, 2025 federal election, Elections Canada is pressuring platforms like TikTok, X, and Meta to intensify content moderation. The agency has been “in touch” with these firms, urging stricter oversight of what it and the 2023 Foreign Interference Commission frame as “information manipulation”—now labeled an “existential threat” to democracy. TikTok responded by launching an in-app Election Center in collaboration with Elections Canada, offering “media literacy and authoritative information” and directing users to official resources via prompts triggered by election-related content. The company also pledged to remove “harmful misinformation” and label unverifiable content using fact-checkers.
Meta, in a blog post, committed to informing users in Canada when, where, and how to vote, deploying “advanced security operations,” and enhancing transparency around political and social ads. It will label both “realistic” AI-generated and “some other types of organic content,” though it left vague whether third-party fact-checkers will be involved. Chief Electoral Officer Stéphane Perrault expressed satisfaction with current platform cooperation.
The article also shows Elections Canada’s mobile-friendly Federal Election Centre interface, with multilingual options, a countdown (36 days at the time of the screenshot), and the voter eligibility requirements: Canadian citizenship and being 18 or older on election day. TikTok’s press release and Meta’s blog post serve as early disclosures.
Read: https://reclaimthenet.org/elections-canada-social-media-censorship-2025
Former CRTC Vice-Chair: Canadian media have a direct financial stake in the election outcome (does it show?)
You may have noticed that most of the mainstream media in Canada aren’t disguising their support for the Mark Carney-led Liberal Party in the forthcoming election.
“This is the first election in which media have a direct financial stake in the outcome: does it show?”
That’s because, according to former CRTC Commissioner and Vice-Chair Peter Menzies:
“For the first time in the nation’s history, almost all the media covering an election have a direct, even existential, stake in the outcome.”
The shorthand for what’s at stake is the theme “Defund The CBC”, which arose as a backlash to overt bias in Canada’s media. Since taxpayer money directly funds state-owned media, and heavily subsidizes the rest, there is a perception that it has come to resemble Soviet-era agitprop more than objective news (to be clear, this is my shorthand for it, not Menzies’).
But as Menzies reports, it’s more nuanced than that (and I may add, again, that the MSM doesn’t seem to do nuance lately either):
“When asked specifically about the future of the Local Journalism Initiative (LJI) program which funds journalists who cover politics and other civic issues at the community level, Poilievre said the Conservatives actually plan to implement an alternative which would give a better and stronger voice to community-based sources of news. He said further details will be revealed as the Conservative election platform is released.”
That means the election won’t be about whether news organizations get government support, but about how they get it.
It is also interesting to note that Menzies’ editorial was published by TheHub.ca, one of the few Canadian independent news organizations that refused government funding on principle (another is Blacklocks.ca, whose daily news roundup is absolutely excellent).
Read the Hub piece here: https://thehub.ca/2025/03/25/peter-menzies-canadas-formerly-free-press-get-set-to-cover-an-election/
And Peter Menzies’ Substack here: https://substack.com/@petermenzies/p-156618555
Are We Already Living in A Post-Singularity World?
Recently I put out a couple of articles on the phenomenon of “Future Shock”, from the Alvin & Heidi Toffler series of books that were possibly the first to pick up on the nature of accelerating technological change.
In the first piece, Frazzledrip Overdrive, I almost off-handedly remarked that we are already living in a post-singularity world, and I drilled down on that concept more in The Singularity Has Already Happened.
The short reason I say that: since the AI explosion that GPT-3 set off a couple of years ago, we are now at the point where AI-generated software is itself coding and generating more software.
Said differently “The code is beginning to code.”
At the time that article dropped, I appeared on War Room to discuss it, and Joe Allen came on to tell me I was crazy and that we were not anywhere near a “singularity” – yet.
We decided to get on a longer podcast to discuss it further:
Watch: https://bombthrower.com/podcast/bttv-117-joe-allen-has-the-singularity-already-happened/
Elsewhere Online:
Conflicting Reports Emerge Regarding Possible Oracle Cloud Security Incident
Read: https://www.securityweek.com/security-firms-say-evidence-seems-to-confirm-oracle-cloud-hack/
Panic Over Data Privacy as 23andMe Faces Court-Supervised Sale
Read: https://www.zerohedge.com/technology/23andme-customers-panic-delete-genetic-data
Critical Next.js Middleware Vulnerability Allows Authorization Bypass
Read: https://hackread.com/next-js-middleware-flaw-bypass-authorization/
iMessage and RCS Targeted by New Effective Phishing Campaigns
Read: https://www.darkreading.com/threat-intelligence/lucid-phishing-exploits-imessage-android-rcs
Chinese Hackers Deploy ShadowPad and Updated SparrowDoor in Recent Attacks
Read: https://thehackernews.com/2025/03/new-sparrowdoor-backdoor-variants-found.html
If you missed the previous issues, they can be read online here:
- March 21st, 2025: AI Jailbreak Exposes Critical Flaws: Researchers Use Chatbots To Generate Malware With No Coding Experience
- March 14th, 2025: PowerSchool Data Breach Exposes Millions Of Students But Hides Key Details
- March 7th, 2025: CTA Proposal Could Fine Airline Passengers For Publicly Discussing Complaint Resolution
- February 28th, 2025: What Did You Get Done Last Week?
- February 21st, 2025: Russian Hackers Exploit Signal’s Device-Linking Feature To Spy On Military And Civilian Communications