<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	>
<channel>
	<title>Comments on: Why the &#8220;AI is just a tool&#8221; narrative is dangerously wrong</title>
	<atom:link href="https://axisofeasy.com/ai-identity-autonomy/anthropic-co-founders-warning-ai-isnt-just-a-tool-anymore/feed/" rel="self" type="application/rss+xml" />
	<link>https://axisofeasy.com/ai-identity-autonomy/anthropic-co-founders-warning-ai-isnt-just-a-tool-anymore/?pk_campaign=feed&#038;pk_kwd=anthropic-co-founders-warning-ai-isnt-just-a-tool-anymore&#038;utm_source=rss&#038;utm_medium=rss&#038;utm_campaign=anthropic-co-founders-warning-ai-isnt-just-a-tool-anymore</link>
	<description>Rapid Coverage of a World Gone Full Cyberpunk</description>
	<lastBuildDate>Sun, 19 Oct 2025 20:05:53 +0000</lastBuildDate>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>By: JonW</title>
		<link>https://axisofeasy.com/ai-identity-autonomy/anthropic-co-founders-warning-ai-isnt-just-a-tool-anymore/?pk_campaign=feed&#038;pk_kwd=anthropic-co-founders-warning-ai-isnt-just-a-tool-anymore/#comment-177885</link>

		<dc:creator><![CDATA[JonW]]></dc:creator>
		<pubDate>Sun, 19 Oct 2025 20:05:53 +0000</pubDate>
		<guid isPermaLink="false">https://axisofeasy.com/?p=32446#comment-177885</guid>

		<description><![CDATA[Part of the planning should involve publicizing and enforcing serious punishments for the HUMANS who allow AI (or any automation) to take fully autonomous control of potentially dangerous physical systems. Skynet was only a problem when it was given control of the missiles. Another key element is requiring segmentation and firewalls for personal and operational information and actions. Even humans should not have unfettered access to massive databases on people and systems; there is too much risk they will eventually be misused, incrementally or catastrophically -- and that goes for intelligence agencies, too!]]></description>
		<content:encoded><![CDATA[<p>Part of the planning should involve publicizing and enforcing serious punishments for the HUMANS who allow AI (or any automation) to take fully autonomous control of potentially dangerous physical systems. Skynet was only a problem when it was given control of the missiles. Another key element is requiring segmentation and firewalls for personal and operational information and actions. Even humans should not have unfettered access to massive databases on people and systems; there is too much risk they will eventually be misused, incrementally or catastrophically &#8212; and that goes for intelligence agencies, too!</p>
]]></content:encoded>
	</item>
	</channel>
</rss>
