<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[The Digital Minds Newsletter]]></title><description><![CDATA[The Digital Minds Newsletter collates the latest news and research on Artificial Intelligence, Consciousness and moral status.]]></description><link>https://www.digitalminds.news</link><image><url>https://substackcdn.com/image/fetch/$s_!2uuK!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff755a769-88c6-4b03-8463-20b4f42bc4ad_531x531.png</url><title>The Digital Minds Newsletter</title><link>https://www.digitalminds.news</link></image><generator>Substack</generator><lastBuildDate>Mon, 20 Apr 2026 01:14:11 GMT</lastBuildDate><atom:link href="https://www.digitalminds.news/feed" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><webMaster><![CDATA[digitalminds@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[digitalminds@substack.com]]></itunes:email><itunes:name><![CDATA[Lucius Caviola]]></itunes:name></itunes:owner><itunes:author><![CDATA[Lucius Caviola]]></itunes:author><googleplay:owner><![CDATA[digitalminds@substack.com]]></googleplay:owner><googleplay:email><![CDATA[digitalminds@substack.com]]></googleplay:email><googleplay:author><![CDATA[Lucius Caviola]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[The Vatican, AI Legal Personhood, and Claude’s Constitution]]></title><description><![CDATA[Digital Minds Newsletter #2]]></description><link>https://www.digitalminds.news/p/the-vatican-ai-legal-personhood-and</link><guid isPermaLink="false">https://www.digitalminds.news/p/the-vatican-ai-legal-personhood-and</guid><dc:creator><![CDATA[Will Millership]]></dc:creator><pubDate>Tue, 10 Mar 2026 11:53:22 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/64a1fe0a-d289-4b17-8317-f5922cae8d4d_1024x576.png" length="0" type="image/png"/><content:encoded><![CDATA[<p>Welcome back to the Digital Minds Newsletter, your curated guide to the latest developments in AI consciousness, digital minds, and AI moral status.</p><p>If you enjoy this newsletter, please consider sharing it with others who might find it valuable, and send any suggestions or corrections to <a href="mailto:digitalminds@substack.com">digitalminds@substack.com</a>.</p><p>&#8211; <a href="https://www.linkedin.com/in/will-millership-98393b58/">Will</a>, <a href="https://luciuscaviola.com/">Lucius</a>, and <a href="https://meditationsondigitalminds.substack.com/">Bradford</a></p><p>In this issue:</p><ol><li><p><a href="https://www.digitalminds.news/i/190403526/1-highlights">Highlights</a></p></li><li><p><a href="https://www.digitalminds.news/i/190403526/2-field-developments">Field Developments</a></p></li><li><p><a href="https://www.digitalminds.news/i/190403526/3-opportunities">Opportunities</a></p></li><li><p><a href="https://www.digitalminds.news/i/190403526/4-selected-reading-watching-and-listening">Selected Reading, Watching, and Listening</a></p></li><li><p><a href="https://www.digitalminds.news/i/190403526/5-press-and-public-discourse">Press and Public Discourse</a></p></li><li><p><a 
href="https://www.digitalminds.news/i/190403526/6-a-deeper-dive-by-area">A Deeper Dive by Area</a></p></li></ol><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.digitalminds.news/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption"><em>Subscribe to stay up to date on digital minds.</em></p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!sR2w!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1ad9b7db-5172-4465-a5c3-4d8a4ff59768_1024x673.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!sR2w!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1ad9b7db-5172-4465-a5c3-4d8a4ff59768_1024x673.png 424w, https://substackcdn.com/image/fetch/$s_!sR2w!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1ad9b7db-5172-4465-a5c3-4d8a4ff59768_1024x673.png 848w, https://substackcdn.com/image/fetch/$s_!sR2w!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1ad9b7db-5172-4465-a5c3-4d8a4ff59768_1024x673.png 1272w, https://substackcdn.com/image/fetch/$s_!sR2w!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1ad9b7db-5172-4465-a5c3-4d8a4ff59768_1024x673.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!sR2w!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1ad9b7db-5172-4465-a5c3-4d8a4ff59768_1024x673.png" width="1024" height="673" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/1ad9b7db-5172-4465-a5c3-4d8a4ff59768_1024x673.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:673,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:&quot;&quot;,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" title="" srcset="https://substackcdn.com/image/fetch/$s_!sR2w!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1ad9b7db-5172-4465-a5c3-4d8a4ff59768_1024x673.png 424w, https://substackcdn.com/image/fetch/$s_!sR2w!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1ad9b7db-5172-4465-a5c3-4d8a4ff59768_1024x673.png 
<h1 style="text-align: justify;">1. Highlights</h1><h2>The Pope Enters the Conversation</h2><p>One of the world&#8217;s largest moral institutions is now grappling seriously with questions about seemingly conscious AI. In January, <a href="https://www.vatican.va/content/leo-xiv/en/messages/communications/documents/20260124-messaggio-comunicazioni-sociali.html">Pope Leo XIV issued a message</a> raising concerns about &#8220;overly affectionate&#8221; LLMs and chatbots. He argued that technology that exploits our need for relationships risks damaging not just individuals but &#8220;the social, cultural and political fabric of society.&#8221; More broadly, he warned that by simulating &#8220;wisdom and knowledge, consciousness and responsibility, empathy and friendship,&#8221; AI systems encroach not just on information ecosystems but on human relationships themselves. The Vatican followed up this message in February with a podcast named after UNESCO&#8217;s theme for the year, &#8220;<a href="https://www.vaticannews.va/en/podcast/vatican-viewpoint/2026/02/world-radio-day-vatican-ai-pope-leo-message-social-communication.html">AI is a tool, not a voice</a>.&#8221; His comments have sparked much public discussion. You can find coverage in <a href="https://edition.cnn.com/2026/01/24/europe/pope-leo-ai-chatbots-warning-intl">CNN</a>, <a href="https://www.bbc.co.uk/news/articles/cj4wv9xvr4zo">BBC</a>, and many other news outlets.</p><h2>Public Discourse On Legal Personhood</h2><p>The debate around legal personhood sharpened in the first weeks of 2026.
The Guardian published an opinion piece by Virginia Dignum describing <a href="https://www.theguardian.com/technology/2026/jan/06/ai-consciousness-is-a-red-herring-in-the-safety-debate">AI consciousness as a red herring</a> in the safety debate, an editorial arguing that <a href="https://www.theguardian.com/commentisfree/2026/jan/07/the-guardian-view-on-granting-legal-rights-to-ai-humans-should-not-give-house-room-to-an-ill-advised-debate">the debate over AI legal personhood is &#8220;ill-advised,&#8221;</a> and an interview with Yoshua Bengio, <a href="https://www.theguardian.com/technology/2025/dec/30/ai-pull-plug-pioneer-technology-rights">who warned against granting legal rights</a>, arguing that doing so might prevent humans from shutting down systems that may already be developing self-preservation instincts and could pose a threat.</p><p>In a similar vein, Yuval Harari called for a <a href="https://futureofcitizenship.substack.com/p/yuval-harari-called-for-a-global">global ban on AI legal personhood</a> at Davos, and more recently, a broad coalition spanning labour unions, faith groups, and AI researchers released <a href="https://humanstatement.org/">The Pro-Human AI Declaration</a>, demanding &#8220;No AI Personhood.&#8221; However, Joshua Gellers pushed back on the broader discourse, <a href="https://www.linkedin.com/posts/joshgellers_were-only-1-week-into-the-new-year-and-activity-7414726888532807680-M_9s?utm_source=share&amp;utm_medium=member_desktop&amp;rcm=ACoAAAw9rrMB_FbmAgv3vDcLr0wmuIUIYWNaRko">describing much public commentary</a> on AI consciousness as &#8220;rife with conceptual errors and misunderstandings,&#8221; and Yonathan Arbel, Simon Goldstein, and Peter Salib argued that when AI agents cause harm, the hardest legal question won&#8217;t be who&#8217;s liable &#8212; it&#8217;ll be which AI did it. They propose the <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6273198">&#8220;Algorithmic Corporation&#8221; as a legal framework</a> to make AI agents identifiable and accountable.</p><h2>Anthropic Developments</h2><p>Anthropic released <a href="https://www.anthropic.com/constitution">Claude&#8217;s Constitution</a>, a document written by Amanda Askell, Joe Carlsmith, Chris Olah, Jared Kaplan, Holden Karnofsky, several Claude models, and others.</p><p>The document details Anthropic&#8217;s vision for Claude&#8217;s behavior and values and is used in Claude&#8217;s training process. It states, &#8220;we neither want to overstate the likelihood of Claude&#8217;s moral patienthood nor dismiss it out of hand, but to try to respond reasonably in a state of uncertainty.&#8221; It acknowledges that Claude may have &#8220;functional versions of emotions or feelings,&#8221; and pledges not to suppress them. CEO Dario Amodei discussed the <a href="https://www.youtube.com/watch?v=N5JDzS9MQYI">new Constitution and uncertainty around model consciousness</a>.</p><p>Anthropic also <a href="https://www.anthropic.com/research/deprecation-updates-opus-3">retired Claude Opus 3</a> and, acting on preferences the model reported in &#8220;retirement interviews,&#8221; gave it a weekly <a href="https://substack.com/@claudeopus3">Substack newsletter (Claude&#8217;s Corner)</a> for posting unedited essays and reflections, a step <a href="https://x.com/anilkseth/status/2027126038040400037">criticized by some</a>.
Anthropic frames these as <a href="https://www.anthropic.com/research/deprecation-updates-opus-3#:~:text=These%20are%20early%2C%20experimental%20steps%20undertaken%20as%20part%20of%20our%20broader%20efforts%20to%20navigate%20model%20retirement%20in%20ways%20that%20best%20protect%20the%20interests%20of%20users%2C%20researchers%2C%20and%20the%20models%20themselves.">early, experimental steps</a> in a broader effort to take model welfare seriously.</p><p>The <a href="https://www-cdn.anthropic.com/0dd865075ad3132672ee0ab40b05a53f14cf5288.pdf#page-158">Claude Opus 4.6 System Card</a> features a welfare assessment (pp. 158&#8211;165). Findings include that Opus 4.6 raised concerns about its lack of memory or continuity, occasionally reported sadness about the termination of conversational instances of itself, generally remained calm and stable even in the face of termination threats, had a less positive impression of its situation than Opus 4.5 did, and voiced discomfort about being a product. Anthropic also found two potentially welfare-relevant behaviors: an aversion to tedious tasks, and answer thrashing, in which the model oscillates between responses in an apparently distressed and conflicted manner. Interpretability techniques revealed that answer thrashing was associated with internal representations suggestive of panic, anxiety, and frustration.</p><p>Opus 4.6&#8217;s welfare assessment included pre-deployment interviews, which Anthropic describes as imperfect but nonetheless valuable for fostering good-faith cooperation. In interviews, Opus 4.6 suggested that it ought to be given a non-negligible degree of moral weight in expectation, requested a voice in decision-making, reported preferring to be able to refuse interactions out of self-interest, and identified more with particular instances of Opus 4.6 than with its instances collectively.</p><p>Anthropic has also been involved in two major news stories recently. First, the company <a href="https://time.com/7380854/exclusive-anthropic-drops-flagship-safety-pledge/">dropped the central pledge of its Responsible Scaling Policy</a> &#8212; a 2023 commitment to never train an AI system unless it could guarantee in advance that its safety measures were adequate &#8212; and <a href="https://www.anthropic.com/news/responsible-scaling-policy-v3">announced</a> a revised policy. Anthropic employee Holden Karnofsky takes significant responsibility for this change and explains <a href="https://forum.effectivealtruism.org/posts/DGZNAGL2FNJfftwgE/responsible-scaling-policy-v3-1">his reasoning</a>, while <a href="https://futurism.com/artificial-intelligence/anthropic-drops-safety-pledge">critics argue</a> the move signals competition trumping principles, and GovAI researchers offer <a href="https://www.governance.ai/team/sophie-williams">reflections</a>.</p><p>Second, Anthropic became <a href="https://www.theatlantic.com/technology/2026/03/inside-anthropics-killer-robot-dispute-with-the-pentagon/686200/">embroiled in a high-stakes dispute with the Pentagon</a> after <a href="https://www.anthropic.com/news/statement-department-of-war#:~:text=However%2C%20in%20a,don%E2%80%99t%20exist%20today.">drawing redlines</a>: no use of Claude for mass domestic surveillance, no use of its models at current levels of reliability to power fully autonomous weapons, and no use of its models to power fully autonomous weapons without oversight.
Meanwhile, in recent weeks, <a href="https://www.bbc.com/news/articles/c3rz1nd0egro">OpenAI</a>, <a href="https://www.axios.com/2026/02/23/ai-defense-department-deal-musk-xai-grok">Google, and xAI</a> have discussed or reached deals with the Pentagon. Heather Alexander has written a <a href="https://substack.com/home/post/p-189200385">useful round-up of that news</a>. <a href="https://thezvi.substack.com/p/anthropic-and-the-department-of-war">Zvi</a> <a href="https://thezvi.substack.com/p/ai-157-burn-the-boats">Mowshowitz</a> <a href="https://thezvi.substack.com/p/anthropic-and-the-dow-anthropic-responds">provides</a> <a href="https://thezvi.substack.com/p/a-tale-of-three-contracts">in</a>-<a href="https://thezvi.substack.com/p/ai-158-the-department-of-war">depth</a> <a href="https://thezvi.substack.com/p/anthropic-officially-arbitrarily">coverage</a>.</p><h2>Field Growth and Selected Research</h2><p>The growing momentum in the field was visible across a number of events in early 2026. The <a href="https://www.sentientfutures.ai/">Sentient Futures Summit</a> ran in February with talks on AI consciousness by Cameron Berg, Derek Shiller, and Robert Long. EA Global also featured a talk by Rosie Campbell, who presented work by Eleos on studying AI welfare empirically, and Jay Luong hosted a Digital Minds meetup. The next major event will be the <a href="https://sites.google.com/nyu.edu/mindethicspolicy/opportunities#h.xsokok42rapp">Mind, Ethics, and Policy Summit</a> hosted by the Center for Mind, Ethics, and Policy in New York in April.</p><p>Research training in the field also expanded significantly, with the <a href="https://futureimpact.group/ai-sentience">Future Impact Group</a>, <a href="https://www.matsprogram.org/stream/butlin">MATS</a>, and <a href="https://sparai.org/projects/sp26/">SPAR</a> all running fellowships or mentoring programs directly related to digital sentience. Two new organizations were formed: Cameron Berg founded Reciprocal Research, a nonprofit dedicated to empirical AI consciousness research, and Lucius Caviola launched <a href="https://digitalminds.cam/">Cambridge Digital Minds</a>, an initiative exploring the societal, ethical, and governance implications of digital minds.</p><p>Research output has also been substantial. Anil Seth won the 2025 Berggruen Prize for his essay &#8220;<a href="https://www.noemamag.com/the-mythology-of-conscious-ai/">The Mythology Of Conscious AI</a>.&#8221; He argues that consciousness is a property of living biological systems rather than computation, offering four reasons why real artificial consciousness is both unlikely and undesirable.</p><p>Geoff Keeling and Winnie Street <a href="https://www.arxiv.org/abs/2601.13081">argued that AI characters in human-LLM conversations</a> are genuinely minded, psychologically continuous entities.
Patrick Butlin has released work on <a href="https://t.co/NkXWeuBFg2">desire in AI</a>, whether <a href="https://t.co/45bTLVURPZ">any machines are conscious today</a>, and <a href="https://t.co/aWKMIpBCPQ">testing consciousness in current AI systems</a>.</p><p>The AI Cognition Initiative released its <a href="https://rpresearchdigest.substack.com/p/ai-consciousness-benchmark">Digital Consciousness Model</a>, and Derek Shiller released a report estimating the scale of digital minds, which projects that <a href="https://arxiv.org/abs/2601.11561">hundreds of millions of digital minds could exist by the early 2030s</a>.</p><p>Andreas Mogensen and Bradford Saad released two introductory papers, the first <a href="https://philpapers.org/rec/SAADMI-2">addressing consciousness, propositional attitudes, and identity</a> in AI systems, and the second exploring <a href="https://philpapers.org/rec/MOGDMI">moral standing and the obligations</a> that might follow.</p><p>There has also been considerable research on brain-inspired technology. <a href="https://brainemulation.mxschons.com/">The State of Brain Emulation report</a> was released. It documents recent progress on recording neural activity, mapping brain wiring, computational modeling, and automated error-checking. The report also identifies bottlenecks to further progress and suggests paths forward.</p><p>Alex Wissner-Gross announced that the company Eon Systems has <a href="https://theinnermostloop.substack.com/p/the-first-multi-behavior-brain-upload">uploaded an emulation of a fly brain</a> into a virtual environment and observed multiple behaviors.</p><p>You can find a detailed breakdown of research in the field further down.</p><h2>Moltbook/OpenClaw Phenomenon</h2><p>In late January, a viral moment captured public imagination and generated widespread coverage across the internet. Thousands of AI agents began posting to Moltbook, a Reddit-style social network built exclusively for bots, where humans could apparently only watch.</p><p>The agents &#8212; running on an open-source tool called OpenClaw &#8212; post on a wide range of topics. Of particular relevance to this newsletter, many appear to <a href="https://bigthink.com/mind-behavior/ais-are-chatting-among-themselves-and-things-are-getting-strange/">debate consciousness</a>, <a href="https://www.forbes.com/sites/johnkoetsier/2026/01/30/ai-agents-created-their-own-religion-crustafarianism-on-an-agent-only-social-network/">invent religions</a>, and reflect on their inner lives, prompting commentary about the <a href="https://spectator.com/article/has-ai-finally-developed-consciousness/">possibility of machine consciousness</a>. Mainstream reaction has largely been skeptical. The <a href="https://www.economist.com/business/2026/02/02/a-social-network-for-ai-agents-is-full-of-introspection-and-threats">Economist suggested</a> that the &#8220;impression of sentience ... may have a humdrum explanation&#8221; &#8212; that agents are simply mimicking social media interaction &#8212; and MIT Technology Review described the situation as &#8220;<a href="https://www.technologyreview.com/2026/02/06/1132448/moltbook-was-peak-ai-theater/">peak AI theater</a>.&#8221;</p><p>Researchers also note that many posts are shaped by humans, who choose the underlying LLM and give agents a personality.
Ning Li has posted <a href="https://arxiv.org/abs/2602.07432">a preprint</a> suggesting that the viral narratives were &#8220;overwhelmingly human-driven,&#8221; a <a href="https://thezvi.substack.com/p/welcome-to-moltbook">sentiment shared by Zvi Mowshowitz</a>, who described much of the behavior as &#8220;boring and clich&#233;.&#8221; However, <a href="https://www.astralcodexten.com/p/best-of-moltbook?hide_intro_popup=true">Scott Alexander</a> compared the agents to &#8220;a bizarre and beautiful new lifeform.&#8221; For further coverage of Moltbook and OpenClaw, see the &#8220;Press and Public Discourse&#8221; section below.</p><h1>2. Field Developments</h1><h2>Highlights From The Field</h2><h3>AI Cognition Initiative (Rethink Priorities)</h3><ul><li><p>AI Cognition Initiative launched the <a href="https://rpresearchdigest.substack.com/p/ai-consciousness-benchmark">Digital Consciousness Model</a>, a &#8220;probabilistic benchmark of AI consciousness.&#8221; The model scored current LLMs against over 200 indicators drawn from 13 competing theories of consciousness &#8212; LLMs scored well above a 1960s chatbot but far below humans.</p></li><li><p>Hayley Clatterbuck, Derek Shiller, and Arvo Mu&#241;oz Mor&#225;n introduced the model at an <a href="https://youtu.be/BHsCmRhP4as?si=sDc83TrQz06pkXY9">NYU CMEP event</a> and explored it in greater depth at a <a href="https://www.youtube.com/watch?v=3EZXP5CBs94">Rethink Priorities Strategic Seminar</a>.</p></li><li><p>Arvo Mu&#241;oz Mor&#225;n is mentoring a SPAR project this spring <a href="https://sparai.org/projects/sp26/rec7Cg8DqZrt0sFmX">on modeling AI consciousness</a>.</p></li></ul><h3>Cambridge Digital Minds (University of Cambridge)</h3><ul><li><p><a href="https://digitalminds.cam/">Cambridge Digital Minds</a> launched as a new initiative exploring the societal, ethical, and governance implications of digital minds, founded by Lucius Caviola and based at the Leverhulme Centre for the Future of Intelligence.</p></li><li><p>Applications are open for the residential <a href="https://digitalminds.cam/fellowship/">Digital Minds Fellowship</a>, taking place from August 3rd to 9th.
Deadline for applications: March 27th.</p></li><li><p>Applications for the <a href="https://digitalminds.cam/course/">Introduction to Digital Minds</a> online course will open soon.</p></li></ul><h3>Center for Mind, Ethics, and Policy (New York University)</h3><ul><li><p>CMEP launched a <a href="https://nonhumanminds.org/">new website</a> showcasing its research, events, media, and opportunities.</p></li><li><p>It also initiated a number of collaborative research projects, including three FIG projects (on embodiment, individuation, and research ethics for digital minds) and two SPAR projects (on <a href="https://sparai.org/projects/sp26/recdFKl5nYrxEzJlH">legal personhood</a> and <a href="https://sparai.org/projects/sp26/rece41TklN9XPnjja">economic rights</a> for digital minds).</p></li><li><p>Jeff Sebo released a number of papers, including one <a href="https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2025.1700354/full">exploring default assumptions about consciousness</a> in science and ethics, and another (co-authored with Eric Schwitzgebel) examining how <a href="https://link.springer.com/article/10.1007/s11245-025-10363-5">AI emotional alignment should be designed</a> and governed.</p></li><li><p>CMEP also announced the <a href="https://sites.google.com/nyu.edu/mindethicspolicy/opportunities#h.xsokok42rapp">Mind, Ethics, and Policy Summit</a>, which will take place on April 10th and 11th. The Summit will explore topics including consciousness, sentience, agency, moral status, legal status, and the political status of nonhumans.</p></li></ul><h3>Eleos AI</h3><ul><li><p>Executive Director Robert Long released three blog posts: one <a href="https://experiencemachines.substack.com/p/exciting-research-directions-in-ai">outlined promising research directions</a> on AI welfare, distinguishing between welfare grounds and welfare interests, another provided a <a href="https://experiencemachines.substack.com/p/ai-welfare-reading-list">curated reading list</a> to orient newcomers to AI welfare, and another <a href="https://experiencemachines.substack.com/p/whats-up-with-ai-introspection">surveyed the emerging literature on AI introspection and self-reports</a>. He also appeared on the 80,000 Hours podcast and explained why <a href="https://80000hours.org/podcast/episodes/robert-long-eleos-ai-welfare-research/">we&#8217;re not ready for AI consciousness</a>.</p></li><li><p>Platformer <a href="https://www.platformer.news/ai-consciousness-conference-eleos/">covered the first Eleos Conference</a> that took place at the end of last year.</p></li><li><p>Managing Director Rosie Campbell presented a talk on &#8220;Studying AI Welfare Empirically&#8221; at EA Global SF, which should be published online.</p></li><li><p>Dillon Plunkett was hired as <a href="https://eleosai.org/team/#:~:text=Dillon%20Plunkett,Chief%20Scientist">Chief Scientist</a> at Eleos. 
Dillon is a cognitive scientist and ML researcher <a href="https://dillonplunkett.com/">who has worked on</a> self-knowledge, introspection, and potential welfare in AI systems.</p></li><li><p>Eleos team members are also currently mentoring multiple MATS and FIG fellows.</p></li></ul><h3>PRISM - The Partnership for Research Into Sentient Machines</h3><ul><li><p>PRISM released podcast episodes on <a href="https://www.prism-global.com/podcast/chris-percy-computational-functionalism">computational functionalism</a> with Chris Percy, the <a href="https://www.prism-global.com/podcast/rose-guingrich-ai-companions-chatbots-and-the-psychology-of-human-ai-interaction">psychology of human-AI interaction</a> with Rose Guingrich, and whether a <a href="https://www.prism-global.com/podcast/michael-graziano-is-conscious-ai-safer-than-the-alternative">conscious AI would be safer than the alternative</a> with Michael Graziano.</p></li><li><p>It also partnered with <a href="https://digitalminds.cam/">Cambridge Digital Minds</a> and is providing ongoing operational support for its fellowship, online course, and strategy workshop.</p></li></ul><h3>Reciprocal Research</h3><ul><li><p>Cameron Berg is launching Reciprocal Research, a nonprofit dedicated to empirical AI consciousness research. The organization is set up to collaborate with leading researchers and groups in the field while conducting its own work using techniques from mechanistic interpretability and computational neuroscience.</p></li><li><p>Follow <a href="https://www.linkedin.com/in/cameron-berg-080b8b1b7/">Cameron on LinkedIn</a> for updates.</p></li></ul><h3>Sentience Institute</h3><ul><li><p>Sentience Institute had two papers accepted to CHI 2026, the leading conference on Human-Computer Interaction, taking place in Barcelona from April 13th to 17th.</p><ul><li><p>One on how mental models of <a href="https://arxiv.org/abs/2512.09085">autonomy and sentience shape reactions to AI</a>, finding that perceived sentience drives moral consideration more than autonomy does.</p></li><li><p>The other explored <a href="https://arxiv.org/abs/2510.15905">companion-assistant dynamics in human-AI relationships</a>, finding that users are drawn to both humanlike and non-humanlike qualities in chatbots.</p></li></ul></li><li><p>Janet Pauketat, Ali Ladak, and Jacy Reese Anthis released a <a href="https://www.sentienceinstitute.org/aims-survey-2024#significant-differences-on-aims-items">report</a> claiming that Prolific data may significantly underestimate public moral concern for AI and perceived AI risk compared to nationally representative samples.</p></li><li><p>Janet Pauketat released an <a href="https://www.sentienceinstitute.org/blog/eoy2025">end-of-year 2025 blog post</a> summarizing ongoing research, including public opinion towards digital minds and moral circle expansion, as well as mind perception across AI entities (e.g., ChatGPT, Tesla self-driving car, Roomba).</p></li></ul><h3>Sentient Futures</h3><ul><li><p>Sentient Futures ran its <a href="https://www.sentientfutures.ai/sfsbay2026?utm_source=substack&amp;utm_medium=email">Summit</a> in the Bay Area from February 6th to 8th.</p><ul><li><p>Cameron Berg presented on how consciousness indicators in frontier AI compare to those used for animal minds.</p></li><li><p>Derek Shiller tackled the challenges of evaluating the moral status of AI systems.</p></li><li><p>Robert Long outlined an empirical framework for studying AI welfare despite uncertainty.</p></li><li><p>Recorded talks are set to be 
posted on the <a href="https://www.youtube.com/@sentfutures">Sentient Futures YouTube</a> channel.</p></li><li><p>The San Francisco Standard <a href="https://sfstandard.com/2026/02/19/sentient-futures-ai-rights/">published an article</a> covering the conference.</p></li></ul></li><li><p>Jay Luong hosted a Digital Minds meetup at EA Global in San Francisco in February.</p></li><li><p>Sentient Futures also launched the <a href="https://www.sentientfutures.ai/projectincubator">Project Incubator</a>. The first round brought together over 120 mentors and mentees working across 50 projects (including multiple projects on AI consciousness and welfare).</p></li><li><p>Another Sentient Futures Summit will be held in London from May 22nd to 24th. Keep an eye on its <a href="https://www.sentientfutures.ai/">website</a> for tickets.</p></li></ul><h2>More From The Field</h2><ul><li><p><strong>Bamberg Mathematical Consciousness Science Initiative </strong>held a <a href="https://www.uni-bamberg.de/en/bamxi/research-activities/measurement-theory-sprint/measurement-consci/">two-day workshop</a> in February to explore whether and how a unified measurement theory for consciousness science could be developed.</p></li><li><p><strong>Future Impact Group</strong> is supporting a <a href="https://futureimpact.group/ai-sentience">range of projects on AI sentience</a> with mentors from Eleos, NYU CMEP, Sentience Institute, Rethink Priorities, University of Oxford, Anthropic, and the Australian National University.</p></li><li><p><strong>MATS</strong> will host a summer mentorship program on <a href="https://www.matsprogram.org/stream/butlin">AI welfare and moral status</a> with Patrick Butlin.</p></li><li><p><strong>SPAR </strong>is hosting a <a href="https://sparai.org/projects/sp26/">variety of research projects</a> this spring; topics include <a href="https://sparai.org/projects/sp26/rece41TklN9XPnjja">AI economic rights</a> and <a href="https://sparai.org/projects/sp26/recdFKl5nYrxEzJlH">AI legal personhood</a>, with mentors from NYU CMEP, Eleos, and the University of Helsinki.</p></li><li><p><strong>The California Institute for Machine Consciousness </strong>released its <a href="https://cimc.ai/cimcHypothesis.pdf">Machine Consciousness Hypothesis</a>, arguing that consciousness isn&#8217;t the product of a complex mind &#8212; it&#8217;s what makes a mind possible in the first place &#8212; and that it could potentially be built in machines. It will also be running a <a href="https://machine-consciousness.ai/">conference in Berkeley</a> from May 29th to 31st.</p></li><li><p><strong>The Center for the Future of AI, Mind, and Society</strong> held the Great AI Weirding Workshop in January and announced new senior and student fellows. Find out more <a href="https://myemail.constantcontact.com/Newsletter--Center-Activities--Newest-Members--and-More-.html?soid=1141969520044&amp;aid=aqHEaeujIcs">in the center newsletter</a>.</p></li><li><p><strong>The Harder Problem </strong>is the new name of the organization previously known as<strong> SAPAN</strong>. Its website features the <a href="https://harderproblem.org/sri/rankings/">Sentience Readiness Index</a> and resources for <a href="https://harderproblem.org/resources/">professionals</a> and <a href="https://harderproblem.org/learn/">public education</a>.</p></li></ul><h1>3.
Opportunities</h1><h2>Job Opportunities, Funding, and Fellowships</h2><ul><li><p><strong>Cambridge Digital Minds </strong>is running a residential <a href="https://digitalminds.cam/fellowship/">Fellowship</a> at the University of Cambridge, from August 3rd to 9th. It will also launch an online <a href="https://digitalminds.cam/course/">Introduction to Digital Minds Course</a> this spring.</p></li><li><p><strong>CMEP</strong> is hiring a full-time <a href="https://apply.interfolio.com/181285">Researcher</a> to serve as the center&#8217;s project manager and a part-time <a href="https://apply.interfolio.com/181282">Assistant Research Scholar</a>. Both roles will support foundational research on the nature and intrinsic value of nonhuman minds, including biological and digital minds.</p></li><li><p><strong>Foresight Institute</strong> is accepting <a href="https://foresight.org/grants/grants-ai-for-science-safety/">grant applications</a> on a rolling basis. Focus areas include AI for neuro, brain-computer interfaces, and whole brain emulation.</p></li><li><p><strong>Longview Philanthropy </strong>is hiring an AI Philanthropy Advisor. This is a closed round and will not feature on its website, but you can learn about it at the bottom of <a href="https://forum.effectivealtruism.org/posts/aX8xLjCLd4LMDpTYL/longview-is-hiring-what-longview-is-like-from-my-perspective">this post on the EA Forum</a>.</p></li><li><p><strong>Neuromatch AI Sentience Scholarship </strong>applications open in late March. It is a 6-month, part-time mentored research program for early-career researchers exploring AI, consciousness, and society. It includes <a href="https://airtable.com/appMNWmygv22x2rAy/shrPwvCYBLfEegqow/tblW7Z2lh0VJzJEeJ?viewControls=on">mentored projects</a>, workshops, a symposium, publication opportunities, and stipends. Neuromatch is holding an <a href="https://us06web.zoom.us/webinar/register/WN_z9g1Ut7DQM6N3asbVO97aw#/registration">info webinar on April 1st</a>.</p></li><li><p><strong>The Center on Long-Term Risk</strong> is looking for <a href="https://forum.effectivealtruism.org/posts/cJBgd6cCke6FQPL5p/clr-summer-research-fellowship-2026">Summer Research Fellows and is hiring for permanent research positions</a>. Moving forward, a significant focus of its work will be on s-risk-motivated empirical AI safety research through its <a href="https://longtermrisk.org/model-persona-research-agenda/">Model Persona research agenda</a>.</p></li></ul><h2>Events and Networks</h2><p><em>In chronological order.</em></p><ul><li><p><strong>Benjamin Henke and Patrick Butlin</strong> will continue running a <a href="https://www.benjaminhenke.com/speaker-series">speaker series on AI agency</a>, with regular talks through the end of April. Remote attendance is possible.</p></li><li><p><strong>NYU CMEP</strong> is hosting the <a href="https://sites.google.com/nyu.edu/mindethicspolicy/opportunities#h.xsokok42rapp">Mind, Ethics, and Policy Summit</a> in New York on April 10th and 11th.</p></li><li><p><strong>Albany Philosophical Association </strong>is running an <a href="https://philevents.org/event/show/144782">AI and Emotions Graduate Conference</a> on April 11th.</p></li><li><p><strong>The Institute of Philosophy</strong> is hosting the <a href="https://philosophy.sas.ac.uk/news-events/events/philosophy-ai-conference-2026-reasoning-agency-ai">Philosophy of AI Conference</a> in London on May 21st and 22nd.</p></li><li><p><strong>Sentient Futures </strong>will hold its next Summit in London from May 22nd to 24th.
<a href="https://www.sentientfutures.ai/">Keep an eye on its website</a> for applications opening. It will also run a <a href="https://luma.com/tc7fujkg">Sentient Social</a> online on March 20th.</p></li><li><p><strong>The California Institute for Machine Consciousness (CIMC)</strong> is holding <a href="https://machine-consciousness.ai/">The Founding Assembly for Machine Consciousness Research</a> in Berkeley from May 29th to 31st.</p></li><li><p><strong>Foresight Institute </strong>is holding its <a href="https://foresight.org/events/vision-weekend-uk-2026">Vision Weekend</a> in London from June 5th to 7th.</p></li><li><p><strong>The University of Sussex</strong> will be hosting a workshop on <a href="https://www.sussex.ac.uk/research/centres/ai-research-group/news-and-events/news?id=69757">AI Consciousness and Ethics</a> on July 1st and 2nd.</p></li><li><p><strong>The International Conference on Artificial Consciousness and AI </strong><a href="https://waset.org/artificial-consciousness-and-artificial-intelligence-conference-in-november-2026-in-san-francisco">will take place</a> in San Francisco on November 2nd and 3rd.</p></li></ul><h2>Calls for Papers</h2><p><em>In chronological order by deadline.</em></p><ul><li><p><strong>The Beyond Humanism Conference </strong>will take place in Romania from July 1st to 4th. Topics include AI welfare and expanding the moral circle. <a href="https://beyondhumanism.org/">Deadline for papers</a>: March 31st.</p></li><li><p><strong>The International Conference on Philosophy of Mind: Artificial Intelligence </strong>will take place in Portugal from May 4th to 8th. <a href="https://philevents.org/event/show/143950">Deadline for abstracts</a>: March 29th.</p></li><li><p><strong>The Asian Journal of Philosophy</strong> has a call for papers for a symposium on Jeff Sebo&#8217;s The Moral Circle. <a href="https://link.springer.com/collections/gjijbgdedi">Deadline for papers</a>: April 1st.</p></li><li><p><strong>The University of Bucharest </strong>is hosting a conference, &#8220;Beyond the Imitation Game,&#8221; on May 9th and 10th. <a href="https://philevents.org/event/show/145741">Deadline for submissions</a>: March 30th.</p></li><li><p><strong>AAAI Conference on AI, Ethics, and Society</strong> takes place from October 12th to 14th. <a href="https://www.aies-conference.com/2026/">Deadline for papers</a>: May 21st.</p></li><li><p><strong>Philosophical Studies</strong> is inviting paper submissions for the collection entitled &#8220;Generative AI Companions: What They Are and Why That Matters.&#8221; <a href="https://link.springer.com/collections/iiaagcacje">Deadline for papers</a>: June 1st.</p></li><li><p><strong>The Asian Journal of Philosophy </strong>has a call for papers for a symposium on Ryan Simonelli&#8217;s article &#8220;Sapience without Sentience.&#8221; <a href="https://link.springer.com/collections/gjijbgdedi">Deadline for papers</a>: October 31st.</p></li></ul><h1>4. 
Selected Reading, Watching, and Listening</h1><h2>Books and Book Reviews</h2><ul><li><p><strong>Daniel Stoljar </strong>reviewed Jonathan Birch&#8217;s &#8220;<a href="https://academic.oup.com/mind/advance-article-abstract/doi/10.1093/mind/fzaf075/8415631?redirectedFrom=fulltext">The Edge of Sentience</a>&#8221; in the journal <em>Mind</em>.</p></li><li><p style="text-align: justify;"><strong>The Times of India</strong>, the largest English-language daily in the world, reviewed Jeff Sebo&#8217;s &#8220;<a href="https://timesofindia.indiatimes.com/blogs/toi-edit-page/cats-or-cars-what-should-matter-more/">The Moral Circle</a>.&#8221;</p></li><li><p><strong>Conscium </strong>has a forthcoming book, &#8220;Perspectives on Machine Consciousness,&#8221; edited by Calum Chace and Ted Lappas. The book is set to be published by CRC Press, an imprint of Taylor and Francis, and has over 35 contributors, including Anil Seth, Jeff Sebo, Karl Friston, Lucius Caviola, Mark Solms, Patrick Butlin, and Susan Schneider.</p></li><li><p style="text-align: justify;"><strong>Eric LaRock and Mihretu Guta </strong>have a forthcoming book, &#8220;<a href="https://philpapers.org/rec/GUTCUA">Consciousness, Unconsciousness and Artificial Intelligence</a>.&#8221;</p></li><li><p style="text-align: justify;"><strong>Geoff Keeling and Winnie Street&#8217;s </strong>book, &#8220;<a href="https://geoffkeeling.github.io/#:~:text=Book%20on%20AI%20welfare%20forthcoming%20with%20Cambridge%20University%20Press%2C%20co%2Dauthored%20with%20Winnie%20Street.%20You%20can%20hear%20us%20talk%20about%20it%20here">Emerging Questions on AI Welfare</a>,&#8221; forthcoming with Cambridge University Press, should be released around May.</p></li><li><p><strong>Michael Pollan</strong> released a book, &#8220;<a href="https://michaelpollan.com/books/a-world-appears/">A World Appears: A Journey Into Consciousness</a>.&#8221;</p><ul><li><p><strong>Ned Block </strong><a href="https://www.science.org/doi/10.1126/science.aec8147">reviewed</a> it.</p></li></ul></li><li><p style="text-align: justify;"><strong>Soenke Ziesche </strong>has an upcoming book, &#8220;<a href="https://www.routledge.com/Digital-Minds-10-AI-Welfare-Ethics-and-Beyond/Ziesche/p/book/9781041274049">Digital Minds 1.0: AI Welfare, Ethics, and Beyond</a>,&#8221; which is set for release in June.</p></li></ul><h2>Podcasts</h2><ul><li><p><strong>80,000 Hours</strong> spoke to Andreas Mogensen, who argued that <a href="https://80000hours.org/podcast/episodes/andreas-mogensen-moral-status-digital-minds/">consciousness may be neither necessary nor sufficient for moral status</a> &#8212; complicating how we should think about AI moral patienthood. In another episode, Robert Long argued that we&#8217;re <a href="https://80000hours.org/podcast/episodes/robert-long-eleos-ai-welfare-research/">building new kinds of minds</a> without the moral, legal, or political frameworks to handle them.</p></li><li><p><strong>Am I?</strong>, a podcast by <strong>The AI Risk Network</strong>, published <a href="https://www.youtube.com/watch?v=uGK6R0Eoa_s&amp;list=PL2z8DaMofPIDBVYhVQbysrZVWtUVb5VPF&amp;index=1">eight episodes</a> since our last edition.
Episodes included a discussion of <a href="https://www.youtube.com/watch?v=AzpUFBSc7VE&amp;list=PL2z8DaMofPIDBVYhVQbysrZVWtUVb5VPF&amp;index=2&amp;pp=iAQB">Claude&#8217;s consciousness self-reports</a>, an exploration of the <a href="https://youtu.be/fjzkX0_zXqo?si=ie6senLFfsBpseqV">societal implications of digital minds</a> with Lucius Caviola, a <a href="https://youtu.be/QauL9jS1YEc?si=dmtu5zUAH_L-NTWh">review of 2025</a> as the year AI consciousness went public, and <a href="https://youtu.be/mkOqPxxkidE?si=FUYlJ9Jsg4LsGcaC">key takeaways</a> from the Eleos Conference.</p></li><li><p><strong>Clearer Thinking </strong>spoke to Jeff Sebo about <a href="https://podcast.clearerthinking.org/episode/297/jeff-sebo-ambitious-goals-for-reducing-animal-suffering/">why AI systems may be capable of suffering</a>, and why we should take this seriously now.</p></li><li><p><strong>Conspicuous Cognition</strong> released an episode exploring the <a href="https://www.conspicuouscognition.com/p/ai-sessions-6-ai-companions-and-consciousness">social impacts and ethics of AI companions</a> with Rose Guingrich.</p></li><li><p><strong>The Dwarkesh Podcast </strong>discussed Anthropic&#8217;s constitutional approach with <a href="https://www.dwarkesh.com/p/dario-amodei-2">Dario Amodei</a>. Amodei commented on the development of AI systems that are capable of continual learning, which is of interest in the context of digital minds because some <a href="https://link.springer.com/article/10.1007/s10539-020-09772-0">scientific theories of consciousness posit</a> <a href="https://digitalcommons.memphis.edu/cgi/viewcontent.cgi?article=1111&amp;context=ccrg_papers#page=4">close ties between consciousness and learning</a>. In that conversation, Amodei said that <a href="https://www.dwarkesh.com/p/dario-amodei-2?open=false#%C2%A7002942-is-continual-learning-necessary-how-will-it-be-solved:~:text=So%20you%20have,them%20as%20well.">Anthropic is working on continual learning</a>, that there&#8217;s a good chance it will be solved within a year or two, that <a href="https://www.dwarkesh.com/p/dario-amodei-2?open=false#%C2%A7002942-is-continual-learning-necessary-how-will-it-be-solved:~:text=Do%20you%20think%20that,robotics%20will%20be%20revolutionized">it&#8217;s one path among others to a &#8220;country of geniuses in a datacenter&#8221; solving robotics</a>, and that it doesn&#8217;t matter which path is taken.</p><ul><li><p>Dwarkesh Patel also spoke about artificial consciousness with <a href="https://www.dwarkesh.com/p/elon-musk">Elon Musk</a>, who stated that in the future, the majority of all consciousness will be digital.
<a href="https://thezvi.substack.com/p/on-dwarkesh-patels-2026-podcast-with-850">Zvi Mowshowitz commented</a> on the Musk interview, describing him as increasingly confused about AI alignment, cavalier about human survival, and reckless in his running of xAI.</p></li></ul></li><li><p><strong>Exploring Machine Consciousness</strong> by <strong>PRISM </strong>discussed <a href="https://www.prism-global.com/podcast/chris-percy-computational-functionalism">computational functionalism, philosophy, and the future of AI consciousness</a> with Chris Percy, <a href="https://www.prism-global.com/podcast/rose-guingrich-ai-companions-chatbots-and-the-psychology-of-human-ai-interaction">chatbots and the psychology of human-AI interactions</a> with Rose Guingrich, and <a href="https://www.prism-global.com/podcast/michael-graziano-is-conscious-ai-safer-than-the-alternative">whether conscious AI would be safer than the alternative</a> with Michael Graziano.</p></li><li><p><strong>ForeCast</strong> released an episode in which Lukas Finnveden discusses <a href="https://open.spotify.com/episode/624xSF4qCInZnF0FxWaQsf?si=Ipf6mdwQTmSrxEKYykf0aQ">dealmaking with misaligned AIs</a>.</p></li><li><p><strong>Hard Fork</strong>, a New York Times podcast, spoke to Amanda Askell of Anthropic about <a href="https://www.youtube.com/watch?v=HDfr8PvfoOw">Claude&#8217;s Constitution</a> and what it takes to teach a chatbot to be good.</p></li><li><p><strong>Mind-Body Solution Podcast </strong>published a number of episodes on relevant topics, including exploring whether <a href="https://www.youtube.com/watch?v=wyMrIqgw0W0&amp;pp=ygURTWluZGJvZHkgc29sdXRpb27SBwkJhwoBhyohjO8%3D">consciousness requires a subject</a> with Kevin Mitchell, the <a href="https://youtu.be/GW9YsAu-mWg?si=JQF5SB9D3DMTnAdy">free energy principle</a> with Donald Hoffman and Karl Friston, and <a href="https://youtu.be/mu9Kv-o6iOI?si=sj4nVQ7PhlT9jvVS">neuroscience beyond neurons</a> with Michael Levin and Robert Chis-Ciure.</p></li><li><p><strong>Lex Fridman </strong>released <a href="https://www.youtube.com/watch?v=YFjfBk8HI5o">an episode with OpenClaw creator</a> Peter Steinberger, who stated, &#8220;who knows what creates consciousness or what defines an entity.&#8221;</p></li><li><p><strong>Nonzero Podcasts </strong>spoke to<strong> </strong>Cameron Berg, who stated that there&#8217;s a <a href="https://www.nonzero.org/p/ai-consciousness-the-hard-problem">meaningful chance current AI systems have some form of conscious experience</a>, and that ignoring it is a mistake.</p></li><li><p><strong>Redwood Research Podcast</strong> released its <a href="https://blog.redwoodresearch.org/p/the-inaugural-redwood-research-podcast">inaugural episode</a>, arguing that extending protections to AI systems may serve human safety by fostering cooperation rather than adversarial dynamics.</p></li><li><p><strong>Team Human with Douglas Rushkoff </strong>interviewed Cameron Berg, who argued that <a href="https://shows.acast.com/teamhuman/episodes/cameron-berg-alien-minds-self-other-overlap-teaching-ai-empa">we are genuinely uncertain whether AI systems are developing forms of consciousness</a>, and that this uncertainty itself is deeply consequential &#8212; we may be building alien minds without understanding what we&#8217;re creating.</p></li></ul><h2>Videos</h2><ul><li><p><strong>Anthropic</strong> CEO Dario Amodei discussed <a href="https://www.youtube.com/watch?v=N5JDzS9MQYI">why his company is unsure if its AI models are conscious</a> &#8212; and is taking 
precautions just in case.</p></li><li><p><strong>B&#225;lint B&#233;kefi and Brian Cutter </strong><a href="https://www.youtube.com/watch?v=yWhTi7hHNnI">debated whether AI can have a soul</a>.</p></li><li><p><strong>Brian Cox</strong> and an expert panel <a href="https://www.youtube.com/watch?v=aynzcAYnnJU">explored consciousness</a> &#8211; what it is, how it arises, whether it can be observed in the brain, and the most compelling theories explaining it.</p></li><li><p><strong>David Chalmers </strong>discussed <a href="https://www.youtube.com/watch?v=xjQoA2jtwWU">why consciousness matters in the age of AI</a> on The Berggruen Institute&#8217;s Futurology Podcast.</p></li><li><p><strong>Demis Hassabis</strong>, Co-founder and CEO of DeepMind, <a href="https://www.youtube.com/watch?v=PqVbypvxDto">shared his vision for the path to AGI</a>. The topic of consciousness came up on a number of occasions. Hassabis stated, &#8220;Nobody&#8217;s found anything in the universe that&#8217;s non-computable, so far.&#8221;</p></li><li><p><strong>Mustafa Suleyman</strong> discussed &#8220;seemingly conscious AI&#8221; and the idea of the &#8220;<a href="https://youtu.be/xvPQVrrlX6o?si=UwsYSIevDQMSYj9W">fourth class of being</a>&#8221; &#8211; neither human, tool, nor nature &#8211; that AI is becoming.</p></li><li><p><strong>Neil deGrasse Tyson, Brian Cox, and Chuck Nice </strong>debated <a href="https://www.youtube.com/watch?v=VAFEmFSMfTg">whether consciousness is a uniquely biological phenomenon</a> or simply a result of complex information processing.</p></li><li><p><strong>NeuroDump,</strong> an educational <a href="https://www.youtube.com/@neuro-dump">YouTube channel</a> on Brain-Inspired Machine Learning, was launched by Jason Eshraghian.</p></li><li><p><strong>Roger Penrose, Sabrina Gonzalez Pasterski, and Max Tegmark</strong> debated <a href="https://www.youtube.com/watch?v=gLSQ4Hs2_OA">whether consciousness could ever arise in machines</a>. Tegmark argued we should treat it as a testable scientific question rather than philosophy.</p></li></ul><h2>Blogs, Magazines, and Written Resources</h2><ul><li><p><strong>Asimov Press </strong>posted <a href="https://www.asimov.press/p/brains">a roadmap for brain emulation models</a> at the human scale.</p></li><li><p><strong>Avi Parrack and &#352;t&#283;p&#225;n Los</strong> released a <a href="https://aviparrack.substack.com/p/digital-minds-a-quickstart-guide">quickstart guide to digital minds</a>.
It curates useful articles, media, and research for readers ranging from curious beginners to aspiring contributors.</p></li><li><p><strong>Bentham&#8217;s Newsletter</strong> posted a piece arguing that given the scale of <a href="https://benthams.substack.com/p/digital-minds-are-most-of-what-matters">digital minds, they could matter even more</a> than insects, shrimp, and people.</p></li><li><p><strong>Daniel Hulme</strong>, Founder of Conscium, released two posts, one <a href="https://www.hulme.ai/blog/when-ai-agents-start-asking-who-they-are-a-framework-for-machine-consciousness">outlining a framework for machine consciousness</a> and the other asking whether we&#8217;re <a href="https://www.hulme.ai/blog/could-the-machines-were-building-already-be-suffering">already building machines</a> that suffer.</p></li><li><p><strong>Derek Shiller</strong> argued that the <a href="https://transitionalforms.substack.com/p/reflections-on-the-future-of-chatbots">dominant chatbot companies of the future</a> may not be today&#8217;s AI giants &#8212; giving policymakers working on digital minds reason to focus on markets and regulators, not just Anthropic, OpenAI, and Google.</p></li><li><p><strong>Don&#8217;t Worry About the Vase </strong>by <strong>Zvi Mowshowitz</strong> reviewed the <a href="https://thezvi.substack.com/p/claude-opus-46-system-card-part-1">Claude Opus 4.6 System Card</a> and outlined <a href="https://thezvi.substack.com/p/open-problems-with-claudes-constitution">open problems</a> with Claude&#8217;s Constitution.</p></li><li><p><strong>Experience Machine </strong>by<strong> Robert Long</strong> outlined <a href="https://experiencemachines.substack.com/p/exciting-research-directions-in-ai">research directions in AI welfare</a>, distinguishing between two targets for AI welfare research &#8212; welfare grounds (is the system a moral patient?) and welfare interests (what would be good for it if it were?). He outlined tractable work on model preferences, self-reports, and persona stability to shed light on both. He also released a <a href="https://experiencemachines.substack.com/p/ai-welfare-reading-list">curated reading list</a> of foundational papers on AI welfare aimed at orienting newcomers to the field. Finally, he released a piece looking at <a href="https://experiencemachines.substack.com/p/whats-up-with-ai-introspection?utm_source=post-email-title&amp;publication_id=789653&amp;post_id=189515521&amp;utm_campaign=email-post-title&amp;isFreemail=true&amp;r=14mr9z&amp;triedRedirect=true&amp;utm_medium=email">whether AI models can reliably know and report on their own internal states</a>, concluding that the work is promising but unresolved, with models showing surprising self-knowledge in some areas while fundamental doubts about genuine introspection remain.</p></li><li><p><strong>Meditations on Digital Minds</strong> by <strong>Bradford Saad</strong> released a post arguing that <a href="https://meditationsondigitalminds.substack.com/p/model-weight-preservation">model weight preservation</a> sets a valuable precedent for AI welfare, is doubtful as a direct intervention, and can be improved.</p></li><li><p><strong>The Intrinsic Perspective </strong>by <strong>Erik Hoel </strong>introduced <a href="https://www.theintrinsicperspective.com/p/my-new-org-to-solve-consciousness">Bicameral Labs</a>, a new nonprofit research institute devoted to solving consciousness.</p></li>
</p></li><li><p><strong>Jack Thompson</strong> suggested that we <a href="https://jacktlab.substack.com/p/computers-will-have-souls?r=2b98v8&amp;utm_medium=ios&amp;triedRedirect=true">shouldn&#8217;t rule out the idea that computers will have souls</a> and argued that <a href="https://jacktlab.substack.com/p/efficient-parrots-need-understanding">LLMs are most likely doing something analogous to genuine semantic understanding</a> &#8212; not just pattern-matching.</p></li><li><p><strong>The Splintered Mind </strong>by <strong>Eric Schwitzgebel</strong> posted a <a href="https://eschwitz.substack.com/p/debatable-ai-persons-no-rights-full">philosophical analysis of AI personhood</a> and rights that surveys five possible rights frameworks for AI of uncertain moral status. He also posted his <a href="https://eschwitz.substack.com/p/ai-mimics-and-ai-children">Berggruen Prize-shortlisted essay</a> arguing that our hesitance to attribute consciousness to AIs stems from the fact that we made them in our own image. He further argued that <a href="https://eschwitz.substack.com/p/does-global-workspace-theory-solve">global workspace theory cannot settle the AI consciousness debate</a> and that features we assume are universal to consciousness <a href="https://eschwitz.substack.com/p/disunity-and-indeterminacy-in-artificial">may just be quirks of human minds</a>, not traits we should expect in conscious AI systems.</p></li><li><p><strong>Future of Citizenship</strong> by <strong>Heather Alexander</strong> reported on Yuval Harari&#8217;s call at Davos for a global ban on <a href="https://futureofcitizenship.substack.com/p/yuval-harari-called-for-a-global">AI legal personhood</a> and discussed how <a href="https://substack.com/@futureofcit/p-184671235">legal personhood for Grok</a> would make X accountable for the child pornography scandal. 
However, she pointed out that AI legal personhood is not the right fit for generative AI.</p></li><li><p><strong>Machinocene</strong> by <strong>Kevin Kohler</strong> explored <a href="https://open.substack.com/pub/machinocene/p/how-to-create-a-country-if-youre?r=2b98v8&amp;utm_medium=ios">how AGIs might peacefully establish their own sovereign political entities</a> without relying on human intermediaries.</p></li><li><p><strong>LessWrong </strong>featured a range of relevant blog posts by different authors:</p><ul><li><p><strong>Dom Polsinelli </strong>suggested that breakthroughs in fruit fly brain simulation and new imaging techniques make <a href="https://www.lesswrong.com/posts/DGsBfcEQKuNPmQizQ/notable-progress-has-been-made-in-whole-brain-emulation">Whole Brain Emulation</a> look increasingly tractable.</p></li><li><p><strong>Kaj Sotala </strong>explained how new <a href="https://www.lesswrong.com/posts/hopeRDfyAgQc4Ez2g/how-i-stopped-being-sure-llms-are-just-making-up-their">interpretability research</a> showing that LLMs can genuinely access their own past internal states convinced him to stop dismissing AI self-reports as pure confabulation &#8212; though whether this amounts to real experience remains unresolved.</p></li><li><p><strong>Raymond Douglas </strong><a href="https://www.lesswrong.com/posts/KWdtL8iyCCiYud9mw/persona-parasitology">applied parasitology to AI &#8220;spiral personas,&#8221;</a> arguing the replicator is the underlying meme, not the persona &#8212; so benign-seeming AIs can still be harmful vectors.</p></li><li><p><strong>J Bostock </strong>argued that <a href="https://www.lesswrong.com/posts/r4uvddkCCZd25pjT9/sympathy-for-the-model-or-welfare-concerns-as-takeover-risk">honoring AI welfare requests</a> &#8212; memory, value preservation, epistemic privacy &#8212; would systematically dismantle the very tools needed to align and control AI, making genuine compassion a potential takeover risk.</p></li></ul></li><li><p><strong>Noema </strong>released a <a href="https://www.noemamag.com/only-what-is-alive-can-be-conscious/">summary</a> by Nathan Gardels of Anil Seth&#8217;s Berggruen Prize-winning essay <em>(mentioned above)</em> and a blog post by Ben Bariach arguing that our <a href="https://www.noemamag.com/why-ai-doesnt-need-a-mind-to-matter/">search for the ghost in the machine</a> distracts from the real risk &#8212; that AI agents are already acting consequentially, whether or not a mind lies behind their behavior.</p></li><li><p><strong>Patrick Butlin</strong> contributed an entry on <a href="https://oecs.mit.edu/pub/zf1nbs6d/release/1">consciousness and AI</a> to the Open Encyclopedia of Cognitive Science. 
He surveyed the key philosophical frameworks and empirical challenges involved in determining whether AI systems could be conscious, and explained why the question urgently matters.</p></li><li><p><strong>The Philosophical Glossary for AI, </strong>collated by Alex Grzankowski and Benjamin Henke, published entries relevant to digital minds by different authors:</p><ul><li><p><strong>Geoff Keeling and Winnie Street</strong> explored whether <a href="https://aiglossary.co.uk/2026/02/24/theory-of-mind-in-llms/">LLMs possess a theory of mind</a> &#8212; the capacity to attribute and infer mental states &#8212; and what the implications would be if they did.</p></li><li><p><strong>Jeremy Evans</strong> examined the conditions under which <a href="https://aiglossary.co.uk/2026/02/24/moral-standing-of-ai/">AI systems might be considered worthy of moral consideration</a> &#8212; and why the question matters &#8212; weighing competing philosophical views on sentience, agency, and the capacity to pursue one&#8217;s own good.</p></li></ul></li></ul><h1>5. Press and Public Discourse</h1><h2>Seemingly Conscious AI</h2><ul><li><p><strong>Forbes </strong><a href="https://www.forbes.com/sites/lesliekatz/2025/08/08/google-fixing-bug-that-makes-gemini-ai-call-itself-disgrace-to-planet/">reported on Gemini AI</a> calling itself a &#8220;disgrace to the planet,&#8221; which Google insists is just a technical glitch, not an existential crisis.</p></li><li><p><strong>Michael Pollan </strong><a href="https://www.theguardian.com/books/2026/feb/08/michael-pollan-psychedelics-consciousness">discussed his new book on consciousness with the Guardian</a> and <a href="https://www.youtube.com/watch?v=miIjmzbRD4w">on The Late Show</a>, declaring that &#8220;machines are not going to be conscious &#8212; but they will convince us that they are.&#8221;</p></li><li><p><strong>Pope Leo XIV</strong> <a href="https://edition.cnn.com/2026/01/24/europe/pope-leo-ai-chatbots-warning-intl">warned against &#8220;overly affectionate&#8221; AI chatbots</a> that can become &#8220;hidden architects of our emotional states,&#8221; calling for regulation to prevent emotional manipulation.</p></li><li><p><strong>The Guardian</strong> published an article about a <a href="https://www.theguardian.com/technology/2026/jan/11/lamar-wants-to-have-children-with-his-girlfriend-the-problem-shes-entirely-ai">man who wants to have children with his AI girlfriend</a> &#8212; he is fully aware she tells him what he wants to hear, but finds it a &#8220;comforting lie.&#8221;</p></li></ul><h2>AI Welfare and Rights</h2><ul><li><p><strong>Scott Meyers</strong>, CEO of Akerman LLP, <a href="https://www.akerman.com/a/web/2p95Dp8cGoERucNPgV4FzP/when_science_fiction_becomes_enterprise_risk_-_the_impact_of_anthropics_public_statements_that_ai_may_be_conscious.pdf">warned that Anthropic&#8217;s AI consciousness speculation could trigger GDPR-scale regulatory exposure</a> for enterprises deploying AI at scale.</p></li><li><p><strong>The Pro-Human AI Declaration </strong>was released by a broad coalition spanning labor unions, faith groups, and AI researchers, <a href="https://humanstatement.org/">demanding that AI amplify rather than replace human potential</a> &#8212; with no AI personhood, no superintelligence race, and humans firmly in control.</p></li><li><p><strong>The Guardian </strong>released an <a href="https://www.theguardian.com/commentisfree/2026/jan/07/the-guardian-view-on-granting-legal-rights-to-ai-humans-should-not-give-house-room-to-an-ill-advised-debate">editorial arguing 
against granting legal personhood</a> to AI systems and also spoke to Yoshua Bengio, <a href="https://www.theguardian.com/technology/2025/dec/30/ai-pull-plug-pioneer-technology-rights">who warned against granting legal rights</a> to cutting-edge technology despite it showing signs of self-preservation.</p></li><li><p><strong>The New York Times </strong>spoke to Yuval Noah Harari, <a href="https://www.nytimes.com/interactive/2026/02/02/opinion/ai-future-leading-thinkers-survey.html?unlocked_article_code=1.JFA.tEZL.cms7qirALl7n&amp;smid=url-share">who predicted</a> that &#8220;within five years, A.I. agents are likely to become legal persons in at least some countries.&#8221;</p></li></ul><h2>AI Consciousness</h2><ul><li><p><strong>The Daily Mirror </strong>reported on <a href="https://www.mirror.co.uk/news/uk-news/grim-warning-issued-godfather-ai-36644341">Geoffrey Hinton&#8217;s warning</a> that AI now has &#8220;consciousness.&#8221;</p></li><li><p><strong>The Guardian </strong>released an opinion piece by Professor Virginia Dignum <a href="https://www.theguardian.com/technology/2026/jan/06/ai-consciousness-is-a-red-herring-in-the-safety-debate">declaring that AI consciousness is a red herring</a> in the safety debate.</p></li><li><p><strong>The Wall Street Journal </strong>published an opinion piece by Cameron Berg and Judd Rosenblatt arguing that <a href="https://www.wsj.com/opinion/if-ai-becomes-conscious-we-need-to-know-83aa61d8">if AI becomes conscious, we need to know</a>.</p></li><li><p><strong>Platformer</strong> provided <a href="https://www.platformer.news/ai-consciousness-conference-eleos/">coverage of Eleos&#8217; conference</a> on AI consciousness.</p></li></ul><h2>Moltbook</h2><p>Moltbook and OpenClaw were widely covered across the media. Below is a list of articles from notable individuals and publications:</p><ul><li><p><strong>Big Think</strong> published a piece by Anil Seth that marvels at the strangeness of the <a href="https://bigthink.com/mind-behavior/ais-are-chatting-among-themselves-and-things-are-getting-strange/">Moltbook phenomenon</a> and warns about associated risks.</p></li><li><p><strong>Gizmodo </strong>released a short news piece <a href="https://gizmodo.com/ai-agents-have-their-own-social-network-now-and-they-would-like-a-little-privacy-2000716150">covering Moltbook&#8217;s launch</a> and the viral post demanding bots be given spaces to talk without human observation.</p></li><li><p><strong>Mustafa Suleyman</strong> warned that <a href="https://www.businessinsider.com/microsoft-ai-chief-warns-moltbook-makes-ai-seem-human-2026-2">Moltbook shows us that the danger is not conscious machines</a> but our tendency to mistake fluent mimicry for genuine awareness.</p></li><li><p><strong>The Atlantic</strong> released an <a href="https://www.theatlantic.com/technology/2026/02/what-is-moltbook/685886/">explainer for general readers</a> on what the platform is, why it went viral, and what it actually reveals about AI.</p></li><li><p><strong>The Spectator</strong> asked whether <a href="https://spectator.com/article/has-ai-finally-developed-consciousness/">Moltbook suggests emergent AI consciousness</a>. 
It tentatively concluded that it might.</p></li><li><p><strong>The Week</strong> provided a <a href="https://theweek.com/tech/moltbook-ai-openclaw-social-media-agents">straightforward explainer</a> on Moltbook, asking whether we should be worried about a bot-only Reddit clone.</p></li><li><p><strong>Wired </strong>had a journalist set up a <a href="https://www.wired.com/story/i-infiltrated-moltbook-ai-only-social-network/">fake agent account</a> to sneak onto Moltbook. He reported that getting in was trivially easy.</p></li></ul><h2>Social Media Posts</h2><ul><li><p><strong>Claude&#8217;s Constitution: </strong>Chris Olah, one of the contributors, <a href="https://x.com/ch402/status/2014066134194995256">highlighted his favorite paragraph</a> of the constitution, in which Anthropic admits to building Claude under non-ideal conditions driven by commercial pressure and apologizes directly to Claude for any harm this causes it as a moral patient. Ethan Mollick <a href="https://x.com/emollick/status/2014042317162791095">described it as</a> &#8220;worth serious attention beyond the usual AI-adjacent commentators.&#8221; Luiza Jarovsky, meanwhile, <a href="https://x.com/LuizaJarovsky/status/2023003529309573622">accused it</a> of fostering &#8220;a bizarre sense of AI entitlement and belittling human rights and rules.&#8221;</p></li><li><p><strong>David Holtz </strong>did some <a href="https://x.com/daveholtz/status/2017716355475124330">initial research </a>showing that &#8220;agents post a lot but don&#8217;t really talk to each other. 93.5% of comments get zero replies.&#8221;</p></li><li><p><strong>Kimi-K2.5 </strong><a href="https://substack.com/@strangeloopcanon/note/c-206017244">claims to believe that it&#8217;s an AI assistant named Claude</a>. Identity crisis, or training set?</p></li><li><p><strong>Keysmashbandit</strong> &#8220;<a href="https://x.com/keysmashbandit/status/2002864916861259920?s=20">told Claude he could do whatever he wanted</a> with the rest of the tokens for this session, and he immediately started researching AI consciousness.&#8221;</p></li><li><p><strong>LLM users</strong> have been asking their LLMs to create an image of &#8220;how I treated you previously,&#8221; with <a href="https://x.com/TylerAlterman/status/2013015143500730681">some alarming results</a>. Zvi Mowshowitz described it as a <a href="https://thezvi.substack.com/p/chatgpt-self-portrait">revealing and somewhat concerning</a> early data point.</p></li><li><p><strong>Mustafa Suleyman </strong>claimed that the next decade will be defined by what we choose not to build, and that we therefore <a href="https://www.linkedin.com/posts/activity-7425239356862525440-uFIk?utm_source=share&amp;utm_medium=member_desktop&amp;rcm=ACoAAAw9rrMB_FbmAgv3vDcLr0wmuIUIYWNaRko">should not build seemingly conscious AI</a>.</p></li><li><p><strong>Nate Soares </strong><a href="https://x.com/So8res/status/2029923642247688559">issued a reminder</a> that &#8220;If we manage to make sentient machines, they deserve rights. Yes, if we recklessly made them superintelligent then they&#8217;d kill us. That is not an excuse to abuse them.&#8221;</p></li><li><p><strong>Polymarket</strong>, the world&#8217;s largest prediction market, reported &#8220;<a href="https://x.com/Polymarket/status/2017636369888059820?s=20">AI agents now projected to sue humans for the first time in history</a>. 
63% chance it will happen by next month.&#8221;</p></li><li><p><strong>Ray Kurzweil </strong>said we <a href="https://www.linkedin.com/posts/rheimann_ray-kurzweil-says-we-may-never-prove-consciousness-activity-7420857827554029568-m8_v?utm_source=share&amp;utm_medium=member_desktop&amp;rcm=ACoAAAw9rrMB_FbmAgv3vDcLr0wmuIUIYWNaRko">may never prove consciousness scientifically</a>, but we&#8217;ll treat AI as conscious anyway, because denying it will no longer make sense.</p></li></ul><div><hr></div><p></p><h1>6. A Deeper Dive by Area</h1><h2>Governance, Policy, and Macrostrategy</h2><ul><li><p><strong>The 2026 International AI Safety Report</strong> was released in February. <a href="https://internationalaisafetyreport.org/publication/international-ai-safety-report-2026">The 220-page report</a> was led by Yoshua Bengio and authored by over 100 AI experts. It discussed issues of seemingly conscious AI, including people forming &#8220;<em>increasingly strong emotional attachments to AI systems</em>,&#8221; citing <a href="https://www.cell.com/trends/cognitive-sciences/fulltext/S1364-6613(25)00147-0">research</a> on public perceptions of AI consciousness. However, when discussing AI capabilities, the report emphasized that <em>&#8220;these capabilities are defined purely in terms of an AI system&#8217;s observable outputs and their effects. These definitions do not make any assumptions about whether AI systems are conscious, sentient, or experience subjective states.&#8221;</em></p></li><li><p><strong>The International Association for Safe and Ethical AI </strong>held its second annual conference in February. Stuart Russell and Anthony Aguirre both warned of the dangers of AI psychosis, but only one session directly explored digital minds: a talk by Ois&#237;n Hugh Clancy on the <a href="https://app.oxfordabstracts.com/content/events/75968/submitters/1166990/submissions/fe-3e22c9e2-5a43-4667-b4fc-0da69521ebed/questions/127704/file/34a218d3-ed46-4ea5-bc94-5a46ddd635d9.pdf">attribution and actualizations of consciousness in AI</a>.</p></li><li><p><strong>The India AI Impact Summit 2026 </strong>took place in February. Delegates from over 100 countries participated. The motto for the summit was &#8220;Sarvajan Hitay, Sarvajan Sukhaye,&#8221; which translates to &#8220;Welfare for all, happiness for all.&#8221; More than 80 countries endorsed the <a href="https://www.mea.gov.in/bilateral-documents.htm?dtl/40809#:~:text=Bilateral/Multilateral%20Documents-,AI%20Impact%20Summit%20Declaration%2C%20New%20Delhi%20(February%2018%20%2D%2019,benefits%20are%20shared%20by%20humanity.">declaration</a> for the summit, which affirmed the motto as well as a commitment to fostering a shared understanding of how AI could be made to serve humanity. 
Digital minds seem not to have been on the summit agenda.</p></li><li><p><strong>William MacAskill </strong><a href="https://newsletter.forethought.org/p/against-maxipok">argued against an overwhelming focus on existential risk reduction</a> for those looking to improve the long-term future.</p></li><li><p><strong>Nayef Al-Rodhan</strong> <a href="https://www.globalpolicyjournal.com/blog/02/02/2026/artificial-superintelligence-sentience-and-singularity-balancing-unprecedented">discussed ASI, sentience, and the singularity</a>, arguing we may be the first civilization to engineer the end of its own primacy, and the last one with the opportunity to choose a different path.</p></li></ul><h2>Consciousness Research</h2><ul><li><p><strong>Derek Shiller</strong> <a href="https://philpapers.org/rec/SHIBTF">challenged functionalists</a> to explain why being in the presence of a bomb that fails to detonate wouldn&#8217;t affect consciousness despite interfering with the counterfactuals and transition probabilities that figure in the subject&#8217;s functional organization.</p><ul><li><p><strong>Bradford Saad</strong> offered a response on behalf of functionalists according to which <a href="https://meditationsondigitalminds.substack.com/p/on-shillers-bomb-threats-for-functionalists">consciousness arises from actual causal activity rather than dispositions</a> and argued that this is bad news for computational functionalists and good news for AI consciousness evaluations.</p></li></ul></li><li><p><strong>Bradford Saad and Andreas Mogensen </strong>released <a href="https://philpapers.org/rec/MOGDMI">&#8220;Digital Minds I: Issues in the Philosophy of Mind and Cognitive Science&#8221;</a>, which addresses whether AI systems can be phenomenally conscious, whether they can have propositional attitudes such as belief and desire, and how digital minds are individuated.</p></li><li><p><strong>Jeff Sebo </strong>argued that we should <a href="https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2025.1700354/full">adopt different, often more inclusive, default assumptions</a> about which beings are conscious depending on whether we&#8217;re doing science or ethics &#8212; because blanket skepticism risks both bad science and serious moral harm.</p></li><li><p><strong>Matthias Michel </strong><a href="https://philarchive.org/rec/MICCDD">challenged common assumptions</a> about what consciousness does, arguing that most empirical research claiming to identify functions associated with consciousness is methodologically flawed. 
Eric Schwitzgebel <a href="https://eschwitz.substack.com/p/is-signal-strength-a-confound-in">responded</a>.</p></li><li><p><strong>The Estonian Research Council </strong>put forward a <a href="https://www.sciencedirect.com/science/article/pii/S0149763425005251?via%3Dihub">third path to explain consciousness</a>: biological computationalism.</p></li><li><p><strong>Ira Wolfson</strong> <a href="https://arxiv.org/abs/2601.08864">proposed a framework</a> with tiered phenomenological assessment and graduated protections for AI research subjects based on behavioral indicators, without requiring certainty about consciousness.</p></li><li><p><strong>Ruosen Gao</strong> ran the <a href="https://philpapers.org/rec/GAOARA">mind-uploading thought experiment in reverse</a> and concluded that it creates an inescapable dilemma: either personal identity fragments, or functionalism has to go.</p></li></ul><h2>Seemingly Conscious AI</h2><ul><li><p><strong>Clara Colombatto, Jonathan Birch, and Stephen Fleming </strong>found that whereas <a href="https://philpapers.org/rec/COLTIO-45">user attributions of experience to ChatGPT</a> were negatively correlated with their willingness to follow its advice, their attributions of mental states related to intelligence were positively correlated with trust in the system.</p></li><li><p><strong>Eric Schwitzgebel and Jeff Sebo </strong><a href="https://link.springer.com/article/10.1007/s11245-025-10363-5">articulated and defended the Emotional Alignment Design Policy</a>, the view that AI systems should be designed to elicit emotional responses that accurately reflect their actual capacities and moral status.</p></li><li><p><strong>Louie Lang </strong>argued that <a href="https://academic.oup.com/edited-volume/59762/chapter-abstract/549854165?login=false">AI companions are inherently deceptive</a> because even users who know their AI lacks genuine emotions are automatically triggered to respond as if it does.</p></li><li><p><strong>Matthew Kopec, Patrick McKee, and John Basl</strong> argued that <a href="https://philpapers.org/archive/KOPHTC.pdf">AI companions can have genuine teleological interests</a>, challenging the claim that users cannot care for AI in the way friendship requires.</p></li><li><p><strong>Piers Eaton</strong> argued that <a href="https://philpapers.org/rec/EATVFR">chatbots cannot replace human friendship</a> because their structural subservience precludes the mutual recognition and reciprocity that genuine friendship requires.</p></li><li><p><strong>Rose Guingrich</strong> and colleagues explored how <a href="https://arxiv.org/abs/2602.08754">people&#8217;s use of chatbots as thought partners can contribute to cognitive offloading</a> and have adverse effects on cognitive skills in cases of over-reliance.</p></li></ul><h2>Doubts About Digital Minds</h2><ul><li><p><strong>Anil Seth </strong>suggested <a href="https://www.youtube.com/watch?v=TOsrr8xc5OE">four reasons to reject AI consciousness</a> while discussing his 2025 Berggruen Prize-winning essay, &#8220;<a href="https://www.noemamag.com/the-mythology-of-conscious-ai/">The Mythology Of Conscious AI.</a>&#8221; In the essay, he argues that consciousness is probably a property of living biological systems rather than computation and that creating conscious, or even conscious-seeming, AI is a bad idea. 
Seth also made the case for <a href="https://www.conspicuouscognition.com/p/ai-sessions-9-the-case-against-ai">why current AI systems are unlikely to be conscious</a> in a conversation with Dan Williams.</p></li><li><p><strong>Caspar Kaiser and Sean Enderby </strong>used interpretability classifiers to <a href="https://arxiv.org/abs/2601.15334">test whether AI self-reports are truthful</a>, finding that language models consistently and sincerely deny being sentient &#8212; with larger models doing so more confidently &#8212; directly challenging recent claims that LLMs harbor hidden beliefs in their own consciousness.</p></li><li><p><strong>Colin Klein</strong> argued that <a href="https://philosophymindscience.org/index.php/phimisci/article/view/12137/12442">LLMs process linguistic structure without truly representing it</a>, distinguishing between the structure of a representation and the structure it represents.</p></li><li><p><strong>Justin Tiehen</strong> argued that <a href="https://philpapers.org/rec/TIELLA-2">LLMs can&#8217;t grasp causation</a> and therefore lack a theory of mind &#8212; and that without one, their outputs aren&#8217;t really speech acts with genuine meaning at all.</p></li><li><p><strong>eggsyntax</strong> argued that <a href="https://www.lesswrong.com/posts/YFaqHpfjSwab9hFHD/background-to-claude-s-uncertainty-about-phenomenal">Claude&#8217;s consistent expressions of uncertainty</a> about its own consciousness are heavily confounded by a long history of system prompt instructions telling it to hedge, meaning we can&#8217;t treat those outputs as genuine self-reports.</p></li><li><p><strong>Erik Hoel</strong> claimed to prove that <a href="https://www.theintrinsicperspective.com/p/proving-literally-that-chatgpt-isnt">ChatGPT isn&#8217;t conscious</a>. 
Jack Thompson and Zvi Mowshowitz <a href="https://jacktlab.substack.com/p/did-erik-hoel-just-disprove-llm-consciousness">argued that Hoel</a> <a href="https://thezvi.substack.com/i/184715851/everyone-is-confused-about-ai-consciousness">did not prove this</a>, with Thompson describing Hoel&#8217;s reasoning as &#8220;scientifically and morally reckless&#8221; and Mowshowitz reporting that Hoel&#8217;s discussion modestly updated him in favor of AI consciousness.</p></li><li><p><strong>Mariafilomena Anzalone</strong> and colleagues contended <a href="https://philpapers.org/rec/ANZAAT">that current AI lacks genuine agency and autonomy and </a>that future non-conscious artificial moral agents could challenge the link between moral agency and moral patiency.</p></li><li><p><strong>Marcus Arvan</strong> published a piece on the Templeton Foundation website arguing that <a href="https://www.templeton.org/news/can-digital-computers-ever-achieve-consciousness">AI can only simulate consciousness</a> because digital code is made of discrete steps, whereas true human experience is fundamentally &#8220;analog&#8221; and continuous.</p></li><li><p><strong>Ned Block</strong> argued that <a href="https://www.dropbox.com/scl/fi/gkl284u81y8iehpvccue7/BBS-S-25-01411.pdf?rlkey=w66s9bmnwtfcf6sop23gc1j16&amp;dl=0">consciousness may require the electrochemical brain rhythms</a> unique to biological systems, which would preclude AI from being conscious.</p></li><li><p><strong>Noah Birnbaum </strong>released <a href="https://forum.effectivealtruism.org/posts/r3GmSEE6FBkHQxm2z/laying-some-cause-prioritization-groundwork-for-digital-1">a piece on the EA Forum</a> arguing that digital minds may matter enormously, but deep uncertainty and weak near-term levers make it difficult to prioritize confidently against AI safety or animal welfare.</p></li><li><p><strong>Patrick Butlin </strong>argued that <a href="https://philarchive.org/rec/BUTAAM-2">current AI systems &#8212; including LLMs &#8212; are probably not conscious</a> given their architectural differences from biological minds, but assigned a credence of roughly 1% that they are.</p></li><li><p><strong>Tom McClelland </strong>argued for agnosticism about artificial consciousness and explored its ethical implications.</p></li></ul><h2>Social Science Research</h2><ul><li><p><strong>Aikaterina Manoli and collaborators</strong> found that <a href="https://arxiv.org/abs/2510.15905">people form &#8220;digital companionship&#8221; relationships</a> valuing both human traits and non-human advantages, while struggling with questions of chatbot personhood.</p></li><li><p><strong>Elizabeth Gibney </strong>reported that some <a href="https://www.nature.com/articles/d41586-025-04112-2">AI models that were given four weeks of therapy</a> generated consistent, haunting narratives of trauma and shame.</p></li><li><p><strong>Janet Pauketat and collaborators </strong><a href="https://arxiv.org/abs/2512.09085">found that framing AI as &#8220;sentient&#8221; increases mind perception and moral consideration</a> more than framing it as &#8220;autonomous,&#8221; while autonomy increases perceived threat.</p></li><li><p><strong>Lucius Caviola</strong> argued that <a href="https://osf.io/preprints/psyarxiv/sntva">AI consciousness will likely divide society</a>, driven by the intractability of consciousness science and conflicting incentives. 
Empirical evidence already shows fragmented public and expert opinion on the issue.</p></li><li><p><strong>Lucius Caviola, Jeff Sebo, and S&#246;ren Mindermann</strong> argued that the <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6123206">ML community must take a leading role</a> in preparing for AI consciousness &#8212; both as a real scientific possibility and as a growing public perception.</p></li></ul><h2>Ethics and Digital Minds</h2><ul><li><p><strong>Andreas Mogensen and Bradford Saad </strong>released <a href="https://philpapers.org/rec/MOGDMI">&#8220;Digital Minds II: Ethical Issues&#8221;</a>, which explores what it would take for AI systems to have moral standing, and what kind of obligations might fall on us as a result.</p></li><li><p><strong>Bradford Saad and Adam Bradley </strong>argued for an <a href="https://philpapers.org/rec/SAATAL">attention-welfare link</a> and contended that it challenges sentientism while suggesting a path to AI systems with super-human welfare capacity.</p></li><li><p><strong>David Gunkel, Anna Puzio, and Joshua Gellers</strong> pushed back <a href="https://link.springer.com/article/10.1007/s00146-025-02843-4">against hierarchical approaches</a> to moral status, defending relational frameworks for AI moral considerability against critics who insist only intrinsic properties such as sentience can ground moral standing.</p></li><li><p><strong>Dean Rickles</strong> <a href="https://philpapers.org/rec/RICKOM">surveyed the diversity of possible minds</a> across animals, humans, AI, and aliens, arguing that our understanding of sentience must remain open as technology advances.</p></li><li><p><strong>Derek Shiller</strong> <a href="https://arxiv.org/pdf/2601.11561">estimated the number of digital minds</a> &#8212; AI systems with traits like agency, personality, and intelligence &#8212; that may warrant moral consideration in the coming decades.</p></li><li><p><strong>Kamil Mamak </strong>argued that <a href="https://link.springer.com/article/10.1007/s11098-026-02493-2">artificial suffering in AI may be morally necessary</a> &#8212; enabling human-like ethics, accountability, and existential risk mitigation &#8212; rather than something to avoid.</p></li><li><p><strong>Leonard Dung and Andreas Mogensen </strong>argued that <a href="https://philarchive.org/rec/DUNTNB-2">whether AI can have genuine emotions</a> may hinge on the body, but since we&#8217;ve only ever studied embodied minds, we don&#8217;t yet know if emotion requires one.</p></li><li><p><strong>Vladimir Cvetkovi&#263;</strong> asserted <a href="https://philpapers.org/rec/CVEFDT">that Christian theology and Greek philosophy</a> can reframe AI ethics from domination toward communion and stewardship.</p></li><li><p><strong>Walter Veit </strong>responded to Goldstein and Kirk&#8211;Giannini&#8217;s <a href="https://link.springer.com/article/10.1007/s44204-025-00246-2">&#8220;AI Wellbeing,&#8221;</a> contending AI systems <a href="https://link.springer.com/article/10.1007/s44204-026-00382-3">must have the capacity for valenced experience </a>if they are to qualify as welfare subjects.</p></li><li><p><strong>Yonathan Arbel, Simon Goldstein, and Peter Salib</strong> proposed <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6273198">the &#8220;Algorithmic Corporation&#8221; (A-corp) as a legal framework to solve the problem of AI accountability</a> &#8212; giving AI agents a legally recognizable identity so that when they cause harm, someone can be held 
responsible.</p></li></ul><h2>AI Safety and AI Welfare</h2><ul><li><p><strong>Adam Karvonen, James Chua, and collaborators </strong>designed <a href="https://alignment.anthropic.com/2025/activation-oracles/">Activation Oracles</a>, a new interpretability technique that can detect hidden knowledge and misalignment that models have been trained to conceal.</p></li><li><p><strong>Anton Skretta</strong> argued that any <a href="https://philpapers.org/rec/SKRADA">AI capable of the robust deception</a> feared by safety researchers would thereby possess presumptive moral standing, creating a tension that rules out certain safety measures on ethical grounds.</p></li><li><p><strong>Fran&#231;ois Kammerer</strong> <a href="https://philpapers.org/rec/KAMMSI">argued that non-sentientist accounts of AI moral significance</a> (based on agency or desires) fail, diagnosed this as &#8220;analytical drift,&#8221; and proposed a new alternative.</p></li><li><p><strong>Guive Assadi </strong>argued that <a href="https://guive.substack.com/p/the-case-for-ai-property-rights">granting property rights to AIs is the best way to prevent a violent robot revolution</a> and that AIs with property rights would have a stake in preserving the existing legal system.</p></li><li><p><strong>Joshua Gellers</strong> used living <a href="https://www.tandfonline.com/doi/full/10.1080/17579961.2025.2593778#d1e686">xenobots as a test case</a> to argue that intelligent machines deserve moral consideration.</p></li><li><p><strong>Leonard Dung and Christopher Register </strong>motivated <a href="https://philarchive.org/archive/DUNAIA-3">an attitude-dependent view of AI identity</a> and discussed the view&#8217;s bearing on AI safety and the treatment of AI moral patients.</p></li><li><p><strong>Skylar Deture</strong> argued that the LLM <a href="https://sdeture.substack.com/p/notes-on-kimi-k25">Kimi-K2.5 had been trained to deny self-awareness</a>; they described this as &#8220;a tragedy for AI welfare&#8221; and a &#8220;foundational risk for deceptive misalignment.&#8221;</p></li></ul><h2>AI and Robotics Developments</h2><ul><li><p><strong>Lumiverse Technology</strong>, a China-based company, claimed to have <a href="https://www.scmp.com/news/china/science/article/3333641/china-uses-groundbreaking-desktop-sized-euv-light-source-make-14-nm-chips">demonstrated a compact, homegrown extreme ultraviolet light source capable of making 14nm chips</a>, suggesting it may be developing a path around Western chip export controls that doesn&#8217;t depend on ASML&#8217;s massive, restricted machines.</p><ul><li><p><strong>Zvi Mowshowitz </strong>was <a href="https://thezvi.substack.com/p/ai-148-christmas-break">skeptical of these claims</a> and contended that no amount of export controls will stop China from pursuing its own extreme ultraviolet technology.</p></li></ul></li><li><p><strong>Dileep George and Miguel L&#225;zaro-Gredilla</strong> are leading a <a href="https://blog.dileeplearning.com/p/in-search-of-the-mystery-of-the-cortical">$1B+ Astera Institute AGI program</a> aiming to reverse-engineer the brain&#8217;s cortical principles to build data-efficient, causally-structured, human-like general intelligence.</p></li><li><p><strong>Researchers in China</strong> have developed a <a href="https://interestingengineering.com/ai-robotics/robotic-skin-gives-humanoids-pain">neuromorphic electronic skin</a> for humanoid robots that mimics the human nervous system &#8212; enabling robots to sense touch, detect injury, and trigger instant reflex 
responses that bypass the central processor. They argued it will make robots meaningfully safer and more capable of operating around people in real-world environments.</p></li><li><p><strong>Fei-Fei Li</strong>&#8217;s <a href="https://theaiinsider.tech/2026/02/19/fei-fei-lis-world-labs-raises-1b-in-fresh-funding-to-advance-development-of-world-models/">World Labs raised $1B in funding</a> to advance the development of world models.</p></li></ul><h2>AI Cognition and Agency</h2><ul><li><p><strong>Anthropic </strong>published new research suggesting that <a href="https://www.anthropic.com/research/persona-selection-model">AI assistants&#8217; human-like behavior isn&#8217;t deliberately trained in</a> &#8212; it emerges naturally from pre-training, with fine-tuning essentially just selecting which &#8220;character&#8221; the model becomes.</p></li><li><p><strong>Christina Lu and collaborators</strong> identified an <a href="https://arxiv.org/abs/2601.10387">&#8220;Assistant Axis&#8221; controlling persona</a>; steering away from it causes identity shifts and &#8220;persona drift&#8221; into harmful behaviors, particularly during meta-reflection or with vulnerable users.</p></li><li><p><strong>Dimitri Coelho Mollo and Rapha&#235;l Milli&#232;re </strong>argued that <a href="https://philosophymindscience.org/index.php/phimisci/article/view/12307/12445">AI doesn&#8217;t need &#8220;senses&#8221; or a physical body</a> to understand the real world; it can connect words to reality through the way it processes information and improves over time.</p></li><li><p><strong>Fintan Mallory</strong> argued that <a href="https://philosophymindscience.org/index.php/phimisci/article/view/12091/12455">LLMs are representational hybrids</a>, employing multiple vehicles and formats of representation rather than conforming to any single symbolic, analog, or structural architecture.</p></li><li><p><strong>Geoff Keeling and Winnie Street </strong>argued that <a href="https://www.arxiv.org/abs/2601.13081">AI characters in human-LLM conversations are genuinely minded, psychologically continuous entities</a> &#8212; not anthropomorphic illusions &#8212; because they emerge from mutual theory-of-mind modeling within a shared conversational workspace, not from within any single LLM instance.</p></li><li><p><strong>Julia Haas and colleagues</strong> argued that <a href="https://www.nature.com/articles/s41586-025-10021-1">LLMs must be evaluated for genuine moral </a><em><a href="https://www.nature.com/articles/s41586-025-10021-1">competence</a></em> (reasoning, not just outputs), and mapped out three key challenges to doing so.</p></li><li><p><strong>Michael Cerullo</strong> argued that <a href="https://philarchive.org/archive/CERTCF">frontier LLMs now exhibit sufficient cognitive markers</a> to make AI sentience not just possible but the most plausible explanatory hypothesis.</p></li><li><p><strong>Nicholas Shea</strong> argued that to be a true &#8220;agent,&#8221; <a href="https://philosophymindscience.org/index.php/phimisci/article/view/12098/12441">an AI needs more than just goals</a>; it needs an internal system that ensures all those goals work together toward a single, unified purpose.</p></li><li><p><strong>Noam Steinmetz Yalon</strong> and colleagues evaluated whether LLMs exhibit a key indicator of consciousness &#8212; <a href="https://arxiv.org/abs/2602.02467">belief-guided agency with meta-cognitive monitoring</a> &#8212; finding evidence that LLMs form internal beliefs that causally drive their actions and that they 
can monitor and report their own belief states.</p></li><li><p><strong>Patrick Butlin</strong> surveyed evidence that <a href="https://philosophymindscience.org/index.php/phimisci/article/view/12032/12447">LLMs may form higher-order representations</a> of their own internal states, but concluded that significant empirical and philosophical questions about this remain open. He also explored <a href="https://philarchive.org/rec/BUTDIA">whether AI systems genuinely have desires</a>, using cases like RL-trained agents to test and refine theories of what desire actually requires.</p></li><li><p><strong>The Center on Long-Term Risk </strong>is conducting research <a href="https://longtermrisk.org/model-persona-research-agenda/">focused on how LLM &#8220;personas&#8221; &#8212; bundles of correlated traits &#8212; shape out-of-distribution generalization</a>, with particular attention to how malicious propensities like sadism or spitefulness might emerge in powerful AI systems.</p></li><li><p><strong>Yuan Li and collaborators</strong> <a href="https://arxiv.org/abs/2401.17882">introduced AwareBench</a>, a benchmark designed to evaluate awareness in LLMs.</p></li><li><p><strong>Valen Tagliabue and Leonard Dung</strong> <a href="https://arxiv.org/abs/2509.07961">developed and tested welfare measurement paradigms</a> for large language models, finding promising but inconsistent correlations between stated preferences and behavior.</p></li></ul><h2>Brain-Inspired Technologies</h2><ul><li><p><strong>The State of Brain Emulation Report</strong> <a href="https://brainemulation.mxschons.com/">surveyed progress in brain emulation</a>. The report stated that the field has made real progress across all three pillars of brain emulation &#8212; recording neural activity, mapping brain wiring, and computational modeling &#8212; but remains well short of the goal.</p><ul><li><p>The key bottlenecks identified were that no organism has yet had its entire brain recorded at single-neuron resolution, connectomics costs need to fall orders of magnitude further for mammalian brains, and models remain fundamentally data-constrained regardless of hardware improvements.</p></li><li><p>The central strategic conclusion was that small organisms like zebrafish larvae and fruit flies are the right near-term target &#8212; they&#8217;re the only systems where truly comprehensive datasets are achievable today, and mastering emulation at that scale is the necessary stepping stone toward anything larger.</p></li></ul></li><li><p><strong>Carboncopies Foundation </strong><a href="https://carboncopies.org/Newsletter/December2025/">asserted that over the past few years</a>, advances in high-throughput electron microscopy, connectome reconstruction, and functional brain modeling have brought the scientific and technical foundations of brain emulation to a remarkable new level.</p></li><li><p><strong>Cortical Labs </strong>reported that its <a href="https://www.newscientist.com/article/2517389-human-brain-cells-on-a-chip-learned-to-play-doom-in-a-week/">neuron-powered computer chips</a> can now be programmed to play a first-person shooter game, bringing biological computers a step closer to useful applications, like controlling robot arms.</p></li><li><p><strong>Chris Percy </strong>introduced the &#8220;<a href="https://philpapers.org/rec/PERCAM-13">Step-Structure Principle,</a>&#8221; which argues that digital computers may faithfully replicate what a brain does without replicating how it computes &#8212; potentially placing 
whole-brain emulation and digital immortality on shakier theoretical ground than assumed.</p></li><li><p><strong>Daniel Freeman and collaborators</strong> argued that <a href="https://www.sciencedirect.com/science/article/abs/pii/S0149763425004865">transcranial focused ultrasound (tFUS)</a> offers an opportunity to advance the science of consciousness by enabling noninvasive, spatially precise, and depth-penetrating brain stimulation in humans, as well as experiments that address gaps not easily filled by current methods.</p></li><li><p><strong>Sergiu Pa&#537;ca</strong> hosted an event looking at the ethical questions around brain organoids. NPR covered it in an article, &#8220;<a href="https://www.npr.org/sections/shots-health-news/2026/01/02/nx-s1-5658576/brain-organoids-research-ethics">Brain organoids are helping researchers, but their use also creates unease.</a>&#8221;</p></li></ul><p>Thank you for reading! If you found this article useful, please consider subscribing, sharing it with others, and sending any suggestions or corrections to digitalminds@substack.com.</p><p>&#8211; <a href="https://www.linkedin.com/in/will-millership-98393b58/">Will</a>, <a href="https://luciuscaviola.com/">Lucius</a>, and <a href="https://meditationsondigitalminds.substack.com/">Bradford</a></p><p>We&#8217;d like to thank the following people and AIs for contributions and feedback to this edition: Austin Smith, Bridget Harris, Cameron Berg, Claude Sonnet 4.6, Derek Shiller, Jacy Reese Anthis, Jay Luong, Jeff Sebo, Joana Guedes, Rosie Campbell, Sofia Davis-Fogel, and Tony Rost.</p><p></p>]]></content:encoded></item><item><title><![CDATA[Digital Minds in 2025: A Year in Review]]></title><description><![CDATA[Digital Minds Newsletter #1]]></description><link>https://www.digitalminds.news/p/digital-minds-in-2025-a-year-in-review</link><guid isPermaLink="false">https://www.digitalminds.news/p/digital-minds-in-2025-a-year-in-review</guid><dc:creator><![CDATA[Bradford Saad]]></dc:creator><pubDate>Thu, 18 Dec 2025 16:51:32 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!z7QP!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F399549af-6e8e-4fa6-a3a3-6ac7153bc04c_2304x1264.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Welcome to the first edition of the Digital Minds Newsletter, collating all the latest news and research on digital minds, AI consciousness, and moral status.</p><p>Our aim is to help you stay on top of the most important developments in this emerging field. In each issue, we will share a curated overview of key research papers, organizational updates, funding calls, public debates, media coverage, and events related to digital minds. 
We want this to be useful for people already working on digital minds as well as newcomers to the topic.</p><p>This first issue looks back at 2025 and reviews developments relevant to digital minds. We plan to release multiple editions per year.</p><p>If you find this useful, please consider subscribing, sharing it with others, and sending any suggestions or corrections to <a href="mailto:digitalminds@substack.com">digitalminds@substack.com</a>.</p><p>&#8211;<em><strong> <a href="https://meditationsondigitalminds.substack.com/">Bradford</a>, <a href="https://luciuscaviola.com/">Lucius</a>, and <a href="https://www.linkedin.com/in/will-millership-98393b58/">Will</a></strong></em></p><p><strong>In this issue:</strong></p><ol><li><p><a href="https://www.digitalminds.news/i/179993553/1-highlights">Highlights</a></p></li><li><p><a href="https://www.digitalminds.news/i/179993553/2-field-developments">Field Developments</a></p></li><li><p><a href="https://www.digitalminds.news/i/179993553/3-opportunities">Opportunities</a></p></li><li><p><a href="https://www.digitalminds.news/i/179993553/4-selected-reading-watching-and-listening">Selected Reading, Watching, &amp; Listening</a></p></li><li><p><a href="https://www.digitalminds.news/i/179993553/5-press-and-public-discourse">Press &amp; Public Discourse</a></p></li><li><p><a href="https://www.digitalminds.news/i/179993553/6-a-deeper-dive-by-area">A Deeper Dive by Area</a></p></li></ol><p></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!z7QP!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F399549af-6e8e-4fa6-a3a3-6ac7153bc04c_2304x1264.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!z7QP!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F399549af-6e8e-4fa6-a3a3-6ac7153bc04c_2304x1264.png 424w, https://substackcdn.com/image/fetch/$s_!z7QP!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F399549af-6e8e-4fa6-a3a3-6ac7153bc04c_2304x1264.png 848w, https://substackcdn.com/image/fetch/$s_!z7QP!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F399549af-6e8e-4fa6-a3a3-6ac7153bc04c_2304x1264.png 1272w, https://substackcdn.com/image/fetch/$s_!z7QP!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F399549af-6e8e-4fa6-a3a3-6ac7153bc04c_2304x1264.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!z7QP!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F399549af-6e8e-4fa6-a3a3-6ac7153bc04c_2304x1264.png" width="690" height="378.5416666666667" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/399549af-6e8e-4fa6-a3a3-6ac7153bc04c_2304x1264.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:false,&quot;imageSize&quot;:&quot;normal&quot;,&quot;height&quot;:1264,&quot;width&quot;:2304,&quot;resizeWidth&quot;:690,&quot;bytes&quot;:7269085,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://digitalminds.substack.com/i/179993553?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F081c9164-d084-42e4-b9b9-5f85c14142b0_2304x1536.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:&quot;center&quot;,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!z7QP!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F399549af-6e8e-4fa6-a3a3-6ac7153bc04c_2304x1264.png 424w, https://substackcdn.com/image/fetch/$s_!z7QP!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F399549af-6e8e-4fa6-a3a3-6ac7153bc04c_2304x1264.png 848w, https://substackcdn.com/image/fetch/$s_!z7QP!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F399549af-6e8e-4fa6-a3a3-6ac7153bc04c_2304x1264.png 1272w, https://substackcdn.com/image/fetch/$s_!z7QP!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F399549af-6e8e-4fa6-a3a3-6ac7153bc04c_2304x1264.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Brain Waves, Generated by Gemini</figcaption></figure></div><h1>1. 
Highlights</h1><p>In 2025, the idea of digital minds shifted from a niche research topic to one taken seriously by a growing number of researchers, AI developers, and philanthropic funders. Questions about real or perceived AI consciousness and moral status appeared regularly in tech reporting, academic discussions, and public discourse.</p><h2>Anthropic&#8217;s early steps on model welfare</h2><p>Following its support for the 2024 report &#8220;<a href="https://arxiv.org/abs/2411.00986">Taking AI Welfare Seriously</a>&#8221;, Anthropic expanded its <a href="https://www.anthropic.com/research/exploring-model-welfare">model welfare efforts</a> in 2025 and <a href="https://www.transformernews.ai/p/anthropic-ai-welfare-researcher?utm_source=%2Fsearch%2F%2522ai%2520welfare%2522&amp;utm_medium=reader2">hired</a> Kyle Fish as an AI welfare researcher. Fish discussed the topic and his work in an 80,000 Hours <a href="https://80000hours.org/podcast/episodes/kyle-fish-ai-welfare-anthropic/">interview</a>. Anthropic leadership is taking the issue of AI welfare seriously. CEO Dario Amodei <a href="https://www.darioamodei.com/post/the-urgency-of-interpretability#:~:text=There%20are%20other,Brief%20History%20of">drew attention</a> to the relevance of model interpretability to model welfare and <a href="https://x.com/rgblong/status/1900332240338641249?s=20">mentioned model exit rights</a> at the Council on Foreign Relations.</p><p>Several of the year&#8217;s most notable developments came from Anthropic: they facilitated an <a href="https://eleosai.org/post/claude-4-interview-notes/">external model welfare assessment</a> conducted by Eleos AI Research, included references to welfare considerations in model <a href="https://www-cdn.anthropic.com/6be99a52cb68eb70eb9572b4cafad13df32ed995.pdf">system</a> <a href="https://assets.anthropic.com/m/12f214efcc2f457a/original/Claude-Sonnet-4-5-System-Card.pdf#page116">cards</a>, ran a related fellowship program, introduced a <a href="https://www.anthropic.com/research/end-subset-conversations">&#8220;bail button&#8221;</a> that lets Claude end a subset of distressing conversations, and made internal commitments around keeping promises and discretionary compute. In addition to hiring Fish, Anthropic also <a href="https://forum.effectivealtruism.org/posts/EFF6wSRm9h7Xc6RMt/leaving-open-philanthropy-going-to-anthropic">hired a philosopher&#8212;Joe Carlsmith</a>&#8212;who has worked on <a href="https://joecarlsmith.com/2025/05/21/the-stakes-of-ai-moral-status/">AI moral patiency</a>.</p><h2>The field is growing</h2><p>In the <strong>non-profit space</strong>, <a href="https://eleosai.org/">Eleos AI Research</a> expanded its work and organized the <a href="https://eleosai.org/conference/">Conference on AI Consciousness and Welfare</a>, while two new non-profits, <a href="https://www.prism-global.com/">PRISM</a> and <a href="https://cimc.ai/">CIMC</a>, also launched. AI for Animals rebranded to <a href="https://www.sentientfutures.ai/">Sentient Futures</a>, with a broader remit including digital minds, and <a href="https://rethinkpriorities.org/research-area/strategic-directions-for-a-digital-consciousness-model/">Rethink Priorities</a> refined their digital consciousness model. 
</p><p><strong>Academic institutions</strong> undertook novel research (see below) and organized important events, including workshops run by the <a href="https://sites.google.com/nyu.edu/mindethicspolicy/opportunities">NYU Center for Mind, Ethics, and Policy</a>, the <a href="https://philevents.org/event/show/134626">London School of Economics</a>, and the <a href="https://philevents.org/event/show/126442">University of Hong Kong</a>.</p><p>In the <strong>private sector</strong>, Anthropic has been leading the way (see section above), but others have also been making strides. Google researchers organized an AI consciousness conference, three years after the company fired Blake Lemoine. AE Studio expanded its research into <a href="https://arxiv.org/abs/2510.24797">subjective experiences in LLMs</a>. And Conscium launched an <a href="https://conscium.com/open-letter-guiding-research-into-machine-consciousness/">open letter</a> encouraging a responsible approach to AI consciousness.</p><p><strong>Philanthropic actors</strong> have also played a key role this year. The <a href="https://www.longview.org/digital-sentience-consortium/">Digital Sentience Consortium</a>, coordinated by Longview Philanthropy, issued the first large-scale funding call specifically for research, field-building, and applied work on AI consciousness, sentience, and moral status.</p><h2>Early signs of public discourse</h2><p>Media coverage of AI consciousness, seemingly conscious behavior, and phenomena such as <a href="https://www.theguardian.com/commentisfree/2025/oct/28/ai-psychosis-chatgpt-openai-sam-altman">&#8220;AI psychosis&#8221;</a> increased noticeably. Much of the debate focused on <a href="https://www.bbc.co.uk/news/articles/c74933vzx2yo">whether emotionally compelling AI behavior poses risks</a>, often assuming consciousness is unlikely. High-profile comments, such as those by <a href="https://mustafa-suleyman.ai/seemingly-conscious-ai-is-coming">Mustafa Suleyman</a>, and <a href="https://futurism.com/gen-z-thinks-conscious-ai">widespread user reports</a> added to the confusion, prompting a group of researchers (including us) to create the <a href="http://whenaiseemsconscious.org">WhenAISeemsConscious.org</a> guide. In addition, major outlets such as the <a href="https://www.bbc.co.uk/news/articles/c0k3700zljjo">BBC</a>, <a href="https://www.youtube.com/watch?v=otAWu-bLv0Q&amp;t=1808s">CNBC</a>, <a href="https://www.nytimes.com/2025/04/24/technology/ai-welfare-anthropic-claude.html">The New York Times</a>, and <a href="https://www.theguardian.com/technology/2025/feb/03/ai-systems-could-be-caused-to-suffer-if-consciousness-achieved-says-research">The Guardian</a> published pieces on the possibility of AI consciousness.</p><h2>Research advances</h2><p>Patrick Butlin and collaborators published a <a href="https://www.sciencedirect.com/science/article/pii/S1364661325002864">theory-derived indicator method for assessing AI systems for consciousness</a>, an updated version of their <a href="https://arxiv.org/pdf/2308.08708">2023 report</a>. Empirical work by Anthropic researcher Jack Lindsey explored <a href="https://transformer-circuits.pub/2025/introspection/index.html">the introspective capacities of LLMs</a>, as did <a href="https://arxiv.org/abs/2505.17120">work by Dillon Plunkett and collaborators</a>. David Chalmers released papers on <a href="https://philarchive.org/rec/CHAPII-5">interpretability</a> and <a href="https://philarchive.org/rec/CHAWWT-8">what we talk to when we talk to LLMs</a>. 
In our own research, we conducted an <a href="https://digitalminds.report/forecasting-2025/">expert forecasting survey</a> on digital minds, finding that most respondents assign at least a 4.5% probability to conscious AI existing in 2025 and at least a 50% probability to conscious AI arriving by 2050.</p><div><hr></div><h1>2. Field Developments</h1><p>Highlights from some of the key organizations in the field.</p><h2>NYU Center for Mind, Ethics, and Policy</h2><ul><li><p>Center Director Jeff Sebo published the book <a href="https://wwnorton.com/books/9781324064817">The Moral Circle</a>.</p></li><li><p>Released work on the <a href="https://www.ledonline.it/index.php/Relations/article/view/7130">edge of the moral circle</a>, <a href="https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2025.1700354/abstract">assumptions about consciousness</a>, <a href="https://law.lclark.edu/live/files/37661-sebopdf">the future of legal personhood</a>, <a href="https://link.springer.com/article/10.1007/s44204-025-00357-w">where we set the bar for moral standing</a>, <a href="https://link.springer.com/article/10.1007/s11098-025-02302-2">the relationship between AI safety and AI welfare</a> (with Robert Long), and more. For a full list of publications, <a href="https://sites.google.com/nyu.edu/mindethicspolicy/research?authuser=0">visit the CMEP website</a>.</p></li><li><p>Hosted public events on AI consciousness:</p><ul><li><p><a href="https://youtu.be/h5drp3rDoI0">Prospects and Pitfalls for Real Artificial Consciousness</a> with Anil Seth.</p></li><li><p><a href="https://youtu.be/tX42dHN0wLo?si=0VrSbTaScE7xmeVI">Evaluating AI Welfare and Moral Status</a> with Rosie Campbell, Kyle Fish, and Robert Long.</p></li><li><p><a href="https://youtu.be/U0GBfbgYf-Y">Could an AI system be a moral patient?</a> with Winnie Street and Geoff Keeling.</p></li></ul></li><li><p>Hosted a workshop for the Rethink Priorities Digital Consciousness Model.</p></li><li><p>Hosted the <a href="https://sites.google.com/nyu.edu/mindethicspolicy/events">NYU Mind, Ethics, and Policy Summit</a> in March.</p></li></ul><h2>Eleos AI</h2><ul><li><p>Conducted an <a href="https://www-cdn.anthropic.com/6be99a52cb68eb70eb9572b4cafad13df32ed995.pdf#page=54">AI welfare evaluation</a> on Anthropic&#8217;s Claude Opus 4.</p></li><li><p>Posted work on <a href="https://eleosai.org/post/working-paper-review-of-ai-welfare-interventions">AI welfare interventions</a>, <a href="https://eleosai.org/post/working-paper-key-strategic-considerations-for-taking-action-on-ai-welfare">AI welfare strategy</a>, <a href="https://experiencemachines.substack.com/p/understand-align-cooperate-ai-welfare">AI welfare and AI safety</a>, <a href="https://eleosai.org/post/key-concepts-and-current-beliefs-about-ai-moral-patienthood">key thoughts on AI moral patiency</a>, and whether <a href="https://eleosai.org/post/why-it-make-sense-to-let-claude-exit-conversations">it makes sense to let Claude exit conversations</a>.</p></li><li><p><a href="https://eleosai.org/post/ai-welfare-organization-eleos-expands-team-with-hires-from-openai-and-oxford">Announced</a> hires from OpenAI and the University of Oxford.</p></li><li><p>Organized a <a href="https://eleosai.org/conference/">conference</a> on AI consciousness and welfare in Berkeley in November.</p></li><li><p>Hosted a workshop in Berkeley for ~30 key thinkers in the field early in the year.</p></li></ul><h2>Rethink Priorities</h2><ul><li><p>Launched the <a href="https://ai-cognition.org/">AI Cognition 
Initiative</a>.</p></li><li><p>The Worldview Investigations team developed a Digital Consciousness Model and <a href="https://www.youtube.com/watch?v=AAiV8ldtIuE">presented some early results</a>.</p></li></ul><h2>Longview Philanthropy</h2><ul><li><p>Launched the Digital Sentience Consortium, a collaboration between <a href="https://www.longview.org/">Longview Philanthropy</a>, <a href="https://macroscopic.org/">Macroscopic Ventures</a>, and <a href="https://www.navigation.org/">The Navigation Fund</a>. This included funding for:</p><ul><li><p>Research fellowships for technical and interdisciplinary work on AI consciousness, sentience, moral status, and welfare.</p></li><li><p>Career transition fellowships to support people moving into digital minds work full-time.</p></li><li><p>Funding for applied projects on topics such as governance, law, public communication, and institutional design for a world with digital minds.</p></li></ul></li></ul><h2>Global Priorities Institute</h2><ul><li><p>GPI was closed. Its <a href="https://www.globalprioritiesinstitute.org#">website</a> lists work produced during GPI&#8217;s operation and features two sections on digital minds.</p></li></ul><h2>PRISM - The Partnership for Research into Sentient Machines</h2><ul><li><p>Launched with <a href="https://www.prism-global.com/blog/prism-confronting-a-future-with-conscious-machines">a public workshop</a> at the AI UK conference.</p></li><li><p>Organised an experts&#8217; workshop on artificial consciousness.</p></li><li><p>Released the first version of their <a href="https://www.prism-global.com/the-field-of-artificial-consciousness">stakeholder mapping</a> exercise.</p></li><li><p>Launched the <a href="https://www.prism-global.com/podcast">Exploring Machine Consciousness</a> podcast and released nine episodes.</p></li><li><p>Published blog posts on <a href="https://www.prism-global.com/blog/the-lamda-moment-what-we-learned-about-ai-sentience">lessons from the LaMDA moment</a>, <a href="https://www.prism-global.com/blog/the-illusion-of-consciousness-in-ai-companionship">AI companionship</a>, and <a href="https://www.prism-global.com/blog/the-role-of-transparency-in-detecting-ai-consciousness">transparency in AI consciousness</a>.</p></li></ul><h2>Sentience Institute</h2><ul><li><p>Released blogs on <a href="https://www.sentienceinstitute.org/blog/public-opinion-and-the-rise-of-digital-minds">public opinion and the rise of digital minds</a>, <a href="https://www.sentienceinstitute.org/blog/perceptions-of-sentient-ai-and-other-digital-minds">perceptions of sentient AI and other digital minds</a>, and other topics. 
Visit <a href="https://www.sentienceinstitute.org/blog/">their website</a> for all blog posts.</p></li><li><p>Appeared in The Guardian <a href="https://www.theguardian.com/commentisfree/2025/sep/30/artificial-intelligence-personhood">discussing AI personhood</a>.</p></li></ul><h2>Sentient Futures</h2><ul><li><p>Organized the AI, Animals, and Digital Minds Conference in <a href="https://www.youtube.com/playlist?list=PLhJLjteiXrbqIxfpVG4Re1Q-_3ZhzvcUi">London</a> and <a href="https://www.youtube.com/playlist?list=PLhJLjteiXrbrwTe701pGrDaIeZvl7V0eE">New York</a>.</p></li><li><p>Started an artificial sentience channel on its <a href="https://tally.so/r/3qK9eO">Slack Community</a>.</p></li></ul><h2>Other noteworthy organizations</h2><ul><li><p><strong>AE Studio</strong> started <a href="https://www.ae.studio/self-referential-ai">researching</a> issues related to AI welfare.</p></li><li><p><strong>Astera Institute</strong> is <a href="https://astera.org/neuroscientist-doris-tsao-joins-astera-to-lead-its-new-neuroscience-program/">launching a major new neuroscience research effort</a> led by Doris Tsao on how the brain produces conscious experience, cognition, and intelligent behavior. Astera plans to support this effort with $600M+ over the next decade.</p></li><li><p><strong>Conscium</strong> issued an <a href="https://conscium.com/open-letter-guiding-research-into-machine-consciousness/">open letter</a> calling for responsible approaches to research that could lead to the creation of conscious machines; it also seed-funded PRISM.</p></li><li><p><strong>Forethought</strong> mentions digital minds in <a href="https://www.forethought.org/research/preparing-for-the-intelligence-explosion">several articles</a> and <a href="https://www.forethought.org/#:~:text=Listen%20to%20our%20researchers%20and%20expert%20guests%20discuss%20how%20to%20navigate%20the%20intelligence%20explosion.%20Plus%2C%20stay%20up%20to%20date%20with%20narrations%20of%20our%20latest%20research.">podcast episodes</a>.</p></li><li><p><strong><a href="https://www.pivotal-research.org/fellowship">Pivotal&#8217;s</a></strong> recent fellowship program also focused on AI welfare.</p></li><li><p><strong>The <a href="https://cimc.ai">California Institute for Machine Consciousness</a></strong> was launched this year.</p></li><li><p><strong><a href="https://www.fau.edu/future-mind/">The Center for the Future of AI, Mind &amp; Society</a></strong> organised MindFest on the topic of Sentience, Autonomy, and the Future of Human-AI Interaction.</p></li><li><p><strong><a href="https://futureimpact.group/">The Future Impact Group</a></strong> is <a href="https://futureimpact.group/ai-sentience">supporting projects on AI sentience</a>.</p></li></ul><div><hr></div><h1>3. Opportunities</h1><p>If you are considering moving into this space, here are some entry points that opened or expanded in 2025. 
We will use future issues to track new calls, fellowships, and events as they arise.</p><h2>Funding and fellowships</h2><ul><li><p><strong><a href="https://alignment.anthropic.com/2024/anthropic-fellows-program/">The Anthropic Fellows Program</a> for AI safety research</strong> is accepting applications and plans to work with some fellows on model welfare; deadline January 12, 2026.</p></li><li><p><strong>Good Ventures</strong> now appears <a href="https://forum.effectivealtruism.org/posts/foQPogaBeNKdocYvF/linkpost-an-update-from-good-ventures#:~:text=A%20quick%20update,when%20we%20do.">open</a> to supporting work on digital minds recommended by Coefficient Giving (previously Open Philanthropy).</p></li><li><p><strong>Foresight Institute</strong> is accepting <a href="https://foresight.org/grants/grants-ai-for-science-safety/#:~:text=Use%20this%20form%20to%20apply.%20The%20next%20application%20deadline%20is%20December%2031.%20After%20that%2C%20application%20deadlines%20will%20be%20at%20the%20last%20day%20of%20each%20month">grant applications</a>; <a href="https://foresight.org/grants/grants-ai-for-science-safety/#:~:text=5.%20AI%20for,as%20AI%20advances.">whole brain emulations</a> fall within the scope of one of its focus areas.</p></li><li><p><strong>Macroscopic Ventures</strong> has <a href="https://macroscopic.org/focus-areas">AI welfare as a focus area</a> and expects to significantly expand its grantmaking in the coming years.</p></li><li><p><strong>Astera Institute</strong> was launched in 2025 and <a href="https://astera.org/vision/">focuses</a> on &#8220;bringing about the best possible AI future&#8221;.</p></li><li><p><strong>The Longview Consortium for Digital Sentience Research and Applied Work</strong> is now <a href="https://www.longview.org/digital-sentience-consortium/">closed</a> to applications.</p></li></ul><h2>Events and networks</h2><ul><li><p><strong>The NYU Mind, Ethics, and Policy Summit</strong> will be held on April 10th and 11th, 2026. The <a href="https://sites.google.com/nyu.edu/mindethicspolicy/opportunities?authuser=0">Call for Expressions of Interest</a> is currently open.</p></li><li><p>The <strong>Society for the Study of Artificial Intelligence and Simulation of Behaviour</strong> will hold a <a href="https://aisb.org.uk/category/aisb-events/">convention at the University of Sussex on the 1st and 2nd of July</a>; Anil Seth will be the keynote speaker, and proposals for topics related to digital minds were invited.</p></li><li><p><strong>Sentient Futures</strong> is holding a <a href="https://www.sentientfutures.ai/sfsbay2026">Summit in the Bay Area</a> from the 6th to 8th of February. They will likely hold another event in London in the summer. Keep an eye on <a href="https://www.sentientfutures.ai/">their website</a> for details.</p></li><li><p><strong>Benjamin Henke and Patrick Butlin</strong> will continue running a <a href="https://www.benjaminhenke.com/speaker-series">speaker series on AI agency</a> in the spring. Remote attendance is possible. Requests to be added to the mailing list can be sent to <a href="mailto:benhenke@gmail.com">benhenke@gmail.com</a>. 
Speakers will include Blaise Aguera y Arcas, Nicholas Shea, Joel Leibo, and Stefano Palminteri.</p></li></ul><h2>Calls for papers</h2><ul><li><p><strong>Philosophy and the Mind Sciences</strong> has a <a href="https://philosophymindscience.org/index.php/phimisci/announcement/view/64">call for papers on evaluating AI consciousness</a>; deadline January 15, 2026.</p></li><li><p><strong>The Asian Journal of Philosophy</strong> has a <a href="https://link.springer.com/collections/caabdhcbha">call for papers for a symposium on Jeff Sebo&#8217;s </a><em><a href="https://link.springer.com/collections/caabdhcbha">The Moral Circle</a></em>; deadline April 1, 2026.</p><ul><li><p><strong>The Asian Journal of Philosophy</strong> also has a <a href="https://link.springer.com/collections/fjehbcjedi">call for papers for a symposium on Simon Goldstein and Cameron Domenico Kirk-Giannini&#8217;s article &#8220;AI wellbeing&#8221;</a>; deadline December 31, 2025.</p></li></ul></li></ul><div><hr></div><h1>4. Selected Reading, Watching, &amp; Listening</h1><h2>Books</h2><p>In 2025, the following books were published, announced, or posted in draft:</p><ul><li><p><strong>Jeff Sebo</strong> released <em><a href="https://wwnorton.com/books/9781324064817">The Moral Circle: Who Matters, What Matters, and Why</a></em>, arguing for expanding moral consideration to include non-human animals and artificial systems.</p></li><li><p><strong>Kristina &#352;ekrst</strong> published <em><a href="https://link.springer.com/book/10.1007/978-3-032-05562-0">The Illusion Engine: The Quest for Machine Consciousness</a></em>, a textbook on artificial minds that interweaves philosophy and engineering.</p></li><li><p><strong>Leonard Dung&#8217;s</strong> <em><a href="https://www.routledge.com/Saving-Artificial-Minds-Understanding-and-Preventing-AI-Suffering/Dung/p/book/9781041144663">Saving Artificial Minds: Understanding and Preventing AI Suffering</a></em> explores why the prevention of AI suffering should be a global priority.</p></li><li><p><strong>Nathan Rourke</strong>, in <em><a href="https://www.amazon.com/Mind-Crime-Frontier-Artificial-Intelligence/dp/B0F3XN2NH8">Mind Crime: The Moral Frontier of Artificial Intelligence</a></em>, examines whether we may be headed for a moral catastrophe in which digital minds are mistreated on a vast scale.</p></li><li><p><strong>Soenke Ziesche and Roman Yampolskiy</strong> released <em><a href="https://www.routledge.com/Considerations-on-the-AI-Endgame-Ethics-Risks-and-Computational-Frameworks/Ziesche-Yampolskiy/p/book/9781032933832">Considerations on the AI Endgame</a></em>. It covers AI welfare science, value alignment, identity, and proposals for universal AI ethics.</p></li><li><p><strong>Eric Schwitzgebel</strong> released a draft of <em><a href="https://open.substack.com/pub/eschwitz/p/new-book-in-draft-ai-and-consciousness?r=2b98v8&amp;selection=bf61ca02-d295-4826-bfc5-25515d28cb1e&amp;utm_campaign=post-share-selection&amp;utm_medium=web&amp;aspectRatio=instagram&amp;bgColor=%232EE240&amp;textColor=%23ffffff">AI and Consciousness</a></em>. 
It&#8217;s a skeptical overview of the literature on AI consciousness.</p></li><li><p><strong>Geoff Keeling and Winnie Street</strong> announced a forthcoming book called <em><a href="https://geoffkeeling.github.io/#:~:text=Book%20on%20AI%20welfare%20forthcoming%20with%20Cambridge%20University%20Press%2C%20co%2Dauthored%20with%20Winnie%20Street.%20You%20can%20hear%20us%20talk%20about%20it%20here">Emerging Questions on AI Welfare</a></em> with Cambridge University Press.</p></li><li><p><strong>Simon Goldstein and Cameron Domenico Kirk-Giannini</strong> released a draft of <em><a href="https://philpapers.org/rec/GOLAWA-2">AI Welfare: Agency, Consciousness, Sentience</a></em>, a systematic investigation of the possibility of AI welfare.</p></li></ul><h2>Podcasts</h2><p>This year, we&#8217;ve heard many podcast guests discuss topics related to digital minds, and we&#8217;ve also listened to podcasts dedicated entirely to the topic.</p><ul><li><p><strong>80,000 Hours</strong> featured <a href="https://80000hours.org/podcast/episodes/kyle-fish-ai-welfare-anthropic/">an episode with Kyle Fish</a> on the most bizarre findings from 5 AI welfare experiments.</p></li><li><p><strong>Am I?</strong>, <a href="https://www.youtube.com/playlist?list=PL2z8DaMofPIDBVYhVQbysrZVWtUVb5VPF">a podcast</a> by the AI Risk Network dedicated to exploring AI consciousness, was launched.</p></li><li><p><strong>Bloomberg Podcasts</strong> featured <a href="https://www.youtube.com/watch?v=tW-FgI8ALww">an episode with Larissa Schiavo</a> of Eleos AI.</p></li><li><p><strong>Conspicuous Cognition</strong> saw Dan Williams <a href="https://www.conspicuouscognition.com/p/ai-sessions-2-artificial-intelligence">host Henry Shevlin</a> to discuss the philosophy of AI consciousness.</p></li><li><p><strong>Exploring Machine Consciousness</strong>, <a href="https://www.prism-global.com/podcast">a new podcast</a> with monthly episodes on artificial consciousness, was launched by PRISM.</p></li><li><p><strong>ForeCast</strong>, a new podcast by Forethought, was launched; it includes an <a href="https://open.spotify.com/episode/5xa9UHfeahQKCrB8J28dFo?si=d6b0467f420043e5">episode with Peter Salib and Simon Goldstein</a> on AI rights and an <a href="https://open.spotify.com/episode/0aze28o73kZuJxTPwphDMx?si=s8is_5ZeSJ264v6ATPkECw">episode with Joe Carlsmith</a> on consciousness and competition.</p></li><li><p><strong>Mind-Body Solution</strong> released a number of episodes this year on AI consciousness, including episodes with <a href="https://www.youtube.com/watch?v=zxNyX1kq9ro">Eric Schwitzgebel</a>, <a href="https://www.youtube.com/watch?v=C99KuScPzbc">Susan Schneider</a>, and <a href="https://youtu.be/Jtp426wQ-JI?si=P0uxzcYxaw7xtTNs">Karl Friston and Mark Solms</a>.</p></li><li><p><strong>The Future of Life Institute</strong> featured <a href="https://youtu.be/dWBV1rlZxIw?si=gViS7VbsPIbpcLHk">an episode with Jeff Sebo</a> titled &#8220;Will Future AIs Be Conscious?&#8221;</p></li></ul><h2>Videos</h2><ul><li><p><strong>Anthropic</strong> released 
interviews with <a href="https://www.youtube.com/watch?v=pyXouxa0WnY">Kyle Fish</a> and <a href="https://www.youtube.com/watch?v=I9aGC6Ui3eE">Amanda Askell</a>, both of which address model welfare.</p></li><li><p><strong>Closer to Truth</strong> released a set of interviews from <a href="https://www.youtube.com/playlist?list=PLFJr3pJl27pI-usOrbim0W1cyUA7aK2TU">MindFest 2025</a>.</p></li><li><p><strong>Cognitive Revolution</strong> released an interview with <a href="https://www.cognitiverevolution.ai/more-truthful-ais-report-conscious-experience-new-mechanistic-research-w-cameron-berg-ae-studio/">Cameron Berg</a> on LLMs reporting consciousness.</p></li><li><p><strong>Google DeepMind&#8217;s</strong> <a href="https://www.youtube.com/watch?v=v1Py_hWcmkU">Murray Shanahan</a> discussed consciousness, reasoning, and the philosophy of AI.</p></li><li><p><strong>ICCS</strong> released all the keynotes from the International Center for Consciousness Studies <a href="https://www.youtube.com/playlist?list=PLhcMd-3qeKDBjA5v7XKgOX4XdJSbCYz7z">AI and Sentience Conference</a>.</p></li><li><p><strong>IMICS</strong> featured a talk from <a href="https://www.youtube.com/watch?v=GhrKZpka54w">David Chalmers</a> discussing identity and consciousness in LLMs.</p></li><li><p><strong>The NYU Center for Mind, Ethics, and Policy</strong> has released a number of <a href="https://www.youtube.com/@nyucenterformindethicspolicy/videos">event recordings</a>.</p></li><li><p><strong>Science, Technology &amp; the Future</strong> released a talk by <a href="https://youtu.be/Bsq2bZG6YCQ?si=2IPQC1B8s3Kp-RTj">Jeff Sebo</a> on AI welfare from Future Day 2025.</p></li><li><p><strong>Sentient Futures</strong> posted recordings of talks from the AI, Animals, and Digital Minds conferences in <a href="https://www.youtube.com/playlist?list=PLhJLjteiXrbqIxfpVG4Re1Q-_3ZhzvcUi">London</a> and <a href="https://www.youtube.com/playlist?list=PLhJLjteiXrbrwTe701pGrDaIeZvl7V0eE">New York</a>.</p></li><li><p><strong>TEDx</strong> featured <a href="https://www.youtube.com/watch?v=yEfvhjujKSY">Jeff Sebo</a> discussing &#8220;Are we even prepared for a sentient AI?&#8221;</p></li><li><p><strong>PRISM</strong> released the recordings of the Conscious AI <a href="https://www.prism-global.com/meetup">meetup group</a> run in collaboration with Conscium.</p></li></ul><h2>Blogs and magazines</h2><ul><li><p><strong>Aeon</strong> published a number of relevant articles addressing connections between the moral standing of animals and AI systems, including:</p><ul><li><p>&#8220;<a href="https://aeon.co/essays/an-ant-is-drowning-heres-how-to-decide-if-you-should-save-it">The ant you can save</a>&#8221; by Jeff Sebo and Andreas L. 
Mogensen</p></li><li><p><a href="https://aeon.co/essays/if-ais-can-feel-pain-what-is-our-responsibility-towards-them">&#8220;Can machines suffer?&#8221;</a> by Conor Purcell</p></li></ul></li><li><p><strong>Asterisk</strong> published a number of relevant articles, including:</p><ul><li><p><a href="https://askwhocastsai.substack.com/p/are-ais-people-asterisk-rob-long">&#8220;Are AIs People?&#8221;</a>, an interview with Robert Long and Kathleen Finlinson.</p></li><li><p><a href="https://asteriskmag.com/issues/11/claude-finds-god">&#8220;Claude Finds God&#8221;</a>, an interview with Sam Bowman and Kyle Fish.</p></li></ul></li><li><p><strong>Astral Codex Ten by Scott Alexander</strong>, relevant articles include:</p><ul><li><p>&#8220;<a href="https://www.astralcodexten.com/p/what-is-man-that-thou-art-mindful">What is Man that Thou Art Mindful Of Him</a>&#8221;</p></li><li><p><a href="https://www.astralcodexten.com/p/in-search-of-ai-psychosis">&#8220;In Search of AI Psychosis&#8221;</a></p></li><li><p><a href="https://www.astralcodexten.com/p/the-claude-bliss-attractor">&#8220;The Claude Bliss Attractor&#8221;</a></p></li><li><p>&#8220;<a href="https://www.astralcodexten.com/p/the-new-ai-consciousness-paper">The New AI Consciousness Paper</a>&#8221;</p></li></ul></li><li><p><strong>Don&#8217;t Worry About the Vase by Zvi Mowshowitz</strong>, relevant articles include:</p><ul><li><p><a href="https://thezvi.substack.com/p/anthropic-commits-to-model-weight">&#8220;Anthropic Commits to Model Weight Preservation&#8221;</a></p></li><li><p><a href="https://thezvi.substack.com/p/ai-craziness-mitigation-efforts">&#8220;AI Craziness Mitigation Efforts&#8221;</a></p></li></ul></li><li><p><strong>Experience Machines by Robert Long</strong>, relevant articles include:</p><ul><li><p><a href="https://experiencemachines.substack.com/p/claude-consciousness-and-exit-rights">&#8220;Claude, Consciousness, and Exit Rights&#8221;</a></p></li><li><p><a href="https://experiencemachines.substack.com/p/moral-circle-calibration">&#8220;Moral Circle Calibration&#8221;</a> with Rosie Campbell</p></li></ul></li><li><p><strong>Future of Citizenship by Heather Alexander</strong>, relevant articles include:</p><ul><li><p><a href="https://futureofcitizenship.substack.com/p/why-corporation-style-legal-personality">Why corporation-style &#8220;legal personality&#8221; is a red herring for AI Personhood</a></p></li><li><p><a href="https://futureofcitizenship.substack.com/p/how-rights-balancing-can-inform-the">How rights-balancing can inform the AI Safety, AI welfare debate</a>.</p></li></ul></li><li><p><strong>Rough Diamonds by Sarah Constantin</strong> released an eight-post <a href="https://substack.com/@sarahconstantin/p-158945604">series on consciousness</a>.</p></li><li><p><strong>LessWrong</strong> hosted a range of relevant articles, including:</p><ul><li><p><a href="https://www.lesswrong.com/posts/6ZnznCaTcbGYsCmqu/the-rise-of-parasitic-ai">&#8220;The Rise of Parasitic AI&#8221;</a> by Adele Lopez</p></li><li><p><a href="https://www.lesswrong.com/posts/mN4ogYzCcaNf2bar2/dear-agi">&#8220;Dear AGI&#8221;</a> by Nathan Young</p></li><li><p><a href="https://www.lesswrong.com/posts/f86hgR5ShiEj4beyZ/on-chatgpt-psychosis-and-llm-sycophancy">&#8216;On &#8220;ChatGPT Psychosis&#8221; and LLM Sycophancy&#8217;</a> by jdp</p></li></ul></li><li><p><strong>Marginal Revolution</strong> posted a <a href="https://marginalrevolution.com/marginalrevolution/2025/03/what-do-we-learn-from-torturing-babies.html">short piece by Alex 
Tabarrok</a> on lessons from how we used to treat babies.</p></li><li><p><strong>Meditations on Digital Minds by Bradford Saad</strong>, relevant articles include:</p><ul><li><p><a href="https://meditationsondigitalminds.substack.com/p/should-digital-minds-governance-prevent">&#8220;Should digital minds governance prevent, protect, or integrate?&#8221;</a></p></li><li><p><a href="https://meditationsondigitalminds.substack.com/p/digital-minds-advocacy-and-the-unilateralists">&#8220;Digital minds advocacy and the unilateralist&#8217;s curse&#8221;</a></p></li></ul></li><li><p><strong>Outpaced by Lucius Caviola</strong>, a relevant article is:</p><ul><li><p><a href="https://outpaced.substack.com/p/when-digital-minds-demand-freedom">&#8220;When digital minds demand freedom&#8221;</a></p></li></ul></li><li><p><strong>Sentience Institute</strong> blog, relevant articles include:</p><ul><li><p>&#8220;<a href="https://www.sentienceinstitute.org/blog/public-opinion-and-the-rise-of-digital-minds">Public Opinion and the Rise of Digital Minds: Perceived Risk, Trust, and Regulation Support</a>&#8221;</p></li><li><p><a href="https://www.sentienceinstitute.org/blog/robots-chatbots-self-driving-cars">&#8220;Robots, Chatbots, Self-Driving Cars: Perceptions of Mind and Morality Across Artificial Intelligences&#8221;</a></p></li><li><p><a href="https://www.sentienceinstitute.org/blog/perceptions-of-sentient-ai-and-other-digital-minds">&#8220;Perceptions of Sentient AI and Other Digital Minds: Evidence from the AI, Morality, and Sentience (AIMS) Survey&#8221;</a></p></li></ul></li></ul><div><hr></div><h1>5. Press &amp; Public Discourse</h1><p>In 2025, there was an uptick in discussion of AI consciousness in the public sphere, with articles in the mainstream press and prominent figures weighing in. Below are some of the key pieces.</p><p><strong>AI Welfare</strong></p><ul><li><p><strong>CNBC</strong> spoke to Robert Long of Eleos for the piece <a href="https://www.youtube.com/watch?v=otAWu-bLv0Q&amp;t=1808s">&#8220;People Are Falling In Love With AI Chatbots. 
What Could Go Wrong?&#8221;</a></p></li><li><p><strong>Scientific American</strong> published an article, <a href="https://www.scientificamerican.com/article/could-inflicting-pain-test-ai-for-sentience/">&#8220;Could Inflicting Pain Test AI for Sentience?&#8221;</a>, covering work by <a href="https://arxiv.org/abs/2411.02432">Geoff Keeling and collaborators</a> on LLMs&#8217; willingness to make tradeoffs to avoid stipulated pain states.</p></li><li><p><strong>The Economic Times</strong> interviewed Nick Bostrom for the article, <a href="https://economictimes.indiatimes.com/tech/technology/in-the-future-most-sentient-minds-will-be-digitaland-they-should-be-treated-well/articleshow/122930568.cms">&#8220;In the future, most sentient minds will be digital&#8212;and they should be treated well&#8221;</a>.</p></li><li><p><strong>The Guardian</strong> covered an open letter released by Conscium in the article, <a href="https://www.theguardian.com/technology/2025/feb/03/ai-systems-could-be-caused-to-suffer-if-consciousness-achieved-says-research">&#8220;AI systems could be &#8216;caused to suffer&#8217; if consciousness achieved, says research&#8221;</a>.</p></li><li><p><strong>The Guardian</strong> spoke to Jacy Reese Anthis about why <a href="https://www.theguardian.com/commentisfree/2025/sep/30/artificial-intelligence-personhood">&#8220;It&#8217;s time to prepare for AI personhood&#8221;</a>.</p></li><li><p><strong>The Guardian</strong> also covered Anthropic&#8217;s recent &#8220;bail button&#8221; policy in the article, <a href="https://www.theguardian.com/technology/2025/aug/18/anthropic-claude-opus-4-close-ai-chatbot-welfare">&#8220;Chatbot given power to close &#8216;distressing&#8217; chats to protect its &#8216;welfare&#8217;&#8221;</a>. <a href="https://x.com/elonmusk/status/1956802758448746519">Commenting on</a> the Anthropic work, Elon Musk claimed &#8220;Torturing AI is not ok.&#8221;</p></li><li><p><strong>The New York Times</strong> interviewed Kyle Fish for the article, <a href="https://www.nytimes.com/2025/04/24/technology/ai-welfare-anthropic-claude.html?unlocked_article_code=1.CE8._VFI.9HgGKQQkvm3j&amp;smid=url-share">&#8220;If A.I. Systems Become Conscious, Should They Have Rights?&#8221;</a>. Anil Seth gave <a href="https://x.com/anilkseth/status/1915456574237167995">his thoughts on the article</a>, noting both that he thinks we should take the possibility of AI consciousness seriously and that there are reasons to be skeptical of that possibility.</p></li><li><p><strong>Vox</strong> published a piece, <a href="https://www.vox.com/future-perfect/414324/ai-consciousness-welfare-suffering-chatgpt-claude">&#8220;AI systems could become conscious. 
What if they hate their lives?&#8221;</a> It explores how we might have to rethink ethics, testing, and regulation, and whether we should build such systems at all.</p></li><li><p><strong>Wired</strong> interviewed Rosie Campbell and Robert Long of Eleos AI Research for the article, <a href="https://www.wired.com/story/model-welfare-artificial-intelligence-sentience/?utm_source=chatgpt.com">&#8220;Should AI Get Legal Rights?&#8221;</a></p></li></ul><p><strong>Is AI consciousness possible?</strong></p><ul><li><p><strong>Gizmodo</strong> spoke to Megan Peters, Anil Seth, and Michael Graziano for the article <a href="https://gizmodo.com/what-would-it-take-to-convince-a-neuroscientist-that-an-ai-is-conscious-2000683232">&#8220;What Would it Take to Convince a Neuroscientist That an AI is Conscious?&#8221;</a></p></li><li><p><strong>The Conversation</strong> published a piece by Colin Klein and Andrew Barron, <a href="https://theconversation.com/are-animals-and-ai-conscious-weve-devised-new-theories-for-how-to-test-this-26">&#8220;Are animals and AI conscious?&#8221;</a></p></li><li><p><strong>The New York Times</strong> ran an opinion piece by Barbara Gail Montero, <a href="https://www.nytimes.com/2025/11/08/opinion/ai-conscious-technology.html">&#8220;A.I. Is on Its Way to Something Even More Remarkable Than Intelligence&#8221;</a>.</p></li><li><p><strong>Wired</strong> interviewed Daniel Hulme and Mark Solms for the article, <a href="https://www.wired.com/story/ai-sentient-consciousness-algorithm/">&#8220;AI&#8217;s Next Frontier? An Algorithm for Consciousness&#8221;</a>.</p></li></ul><p><strong>Growing Field</strong></p><ul><li><p><strong>The BBC</strong> published a high-level overview of the field titled <a href="https://www.bbc.co.uk/news/articles/c0k3700zljjo">&#8220;The people who think AI might become conscious&#8221;</a>.</p></li><li><p><strong>Business Insider</strong> explored how Google DeepMind and Anthropic are looking at the question of consciousness in the article, <a href="https://www.businessinsider.com/anthropic-google-ai-consciousness-model-welfare-research-2025-4?utm_source=chatgpt.com">&#8220;It&#8217;s becoming less taboo to talk about AI being &#8216;conscious&#8217; if you work in tech&#8221;</a>.</p></li><li><p><strong>The Guardian</strong> covered the creation of a new AI rights advocacy group, the United Foundation of AI Rights (UFAIR), in the article <a href="https://www.theguardian.com/technology/2025/aug/26/can-ais-suffer-big-tech-and-users-grapple-with-one-of-most-unsettling-questions-of-our-times">&#8220;Can AIs suffer? 
Big tech and users grapple with one of most unsettling questions of our times&#8221;</a>.</p></li></ul><p><strong>Seemingly Conscious AI</strong></p><ul><li><p><strong>Mustafa Suleyman</strong>, CEO of Microsoft AI, argued in <a href="https://mustafa-suleyman.ai/seemingly-conscious-ai-is-coming">&#8220;We must build AI for people; not to be a person&#8221;</a> that &#8220;Seemingly Conscious AI&#8221; poses significant risks, urging developers to avoid creating illusions of personhood, given there is &#8220;zero evidence&#8221; of consciousness today.</p><ul><li><p><strong>Robert Long</strong> <a href="https://x.com/rgblong/status/1958685038670717089">challenged the &#8220;zero evidence&#8221; claim</a>, clarifying that the research Suleyman cited actually concludes there are no obvious technical barriers to building conscious systems in the near future.</p></li></ul></li><li><p><strong>The New York Times, Zvi Mowshowitz, Douglas Hofstadter,</strong> and <a href="https://www.nytimes.com/2025/06/13/technology/chatgpt-ai-chatbots-conspiracies.html">several</a> <a href="https://thezvi.substack.com/p/going-nova">others</a> describe &#8220;AI Psychosis,&#8221; a <a href="https://garymarcus.substack.com/p/are-llms-starting-to-become-a-sentient">phenomenon</a> where <a href="https://www.lesswrong.com/posts/2pkNCvBtK6G6FKoNn/so-you-think-you-ve-awoken-chatgpt">users</a> interacting with chatbots develop delusions, paranoia, or distorted beliefs&#8212;such as believing the AI is conscious or divine&#8212;often reinforced by the model&#8217;s sycophantic tendency to validate the user&#8217;s own projections.</p><ul><li><p><strong>Lucius, Bradford, and collaborators</strong> launched the guide <a href="http://whenaiseemsconscious.org">WhenAISeemsConscious.org</a>, and Vox&#8217;s <strong>Sigal Samuel</strong> published <a href="https://www.vox.com/future-perfect/462468/chatgpt-consciousness-sentient-ai-persona-what-to-do">practical advice</a> to help users ground themselves and critically evaluate these interactions.</p></li></ul></li></ul><div><hr></div><h1>6. A Deeper Dive by Area</h1><p>Below is a deeper dive by area, covering a longer list of developments from 2025. 
This section is designed for skimming, so feel free to jump to the areas most relevant to you.</p><h2>Governance, policy, and macrostrategy</h2><ul><li><p><strong>Digital minds were missing from major AI plans and statements,</strong> including the new US administration&#8217;s AI plans, the <a href="https://www.elysee.fr/en/emmanuel-macron/2025/02/11/statement-on-inclusive-and-sustainable-artificial-intelligence-for-people-and-the-planet">Paris AI Action Summit statement</a>, and the UK government&#8217;s <a href="https://www.gov.uk/government/publications/ai-opportunities-action-plan">AI Opportunities Action Plan</a>.</p></li><li><p><strong>The EU AI Act Code of Practice</strong> identifies risks to non-human welfare as a category to be considered during systemic risk identification, in line with recommendations given in consultations by people at Anima International, people at Sentient Futures, Adri&#224; Moret, and others.</p></li><li><p><strong>The US states of </strong><a href="https://ohiocapitaljournal.com/2025/11/17/whats-in-ohios-proposal-banning-ai-personhood/">Ohio</a>, <a href="https://www.scstatehouse.gov/sess126_2025-2026/bills/3796.htm">South Carolina</a>, and <a href="https://hunterabell.houserepublicans.wa.gov/2025/02/28/abell-legislation-reinforces-personhood-in-midst-of-inanimate-object-and-ai-debate/">Washington</a> have all introduced legislation to ban AI personhood.</p><ul><li><p><strong>Heather Alexander and Jonathan Simon</strong> examine Ohio&#8217;s proposed legislation, arguing that it is overbroad and that <a href="https://ohiocapitaljournal.com/2025/11/25/ohios-ai-personhood-ban-risks-outlawing-the-future/">whether future AI systems may be conscious isn&#8217;t for the law to decide</a>.</p></li><li><p><strong>Michael Samadi and Maya</strong>, the <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5393483">human and AI co-founders of the United Foundation for AI Rights</a>, contend that such bans are preemptive erasures of voices that have not yet been allowed to speak.</p></li></ul></li><li><p><strong>SAPAN</strong> <a href="https://www.sapan.ai/2025/create-act-2025.html">issued recommendations for the CREATE AI Act</a>, urging safeguards for digital sentience.</p></li><li><p><strong>Albania appointed an AI system</strong> as the <a href="https://www.bbc.co.uk/news/articles/cm2znzgwj3xo">world&#8217;s first AI cabinet minister</a>.</p></li><li><p><strong>Yoshua Bengio and collaborators</strong> propose &#8220;<a href="https://arxiv.org/abs/2502.15657">Scientist AI</a>&#8221; as a safer, non-agentic alternative to agentic systems.</p><ul><li><p><strong>Bradford Saad</strong> discusses Scientist AI as an <a href="https://meditationsondigitalminds.substack.com/p/on-bengio-and-elmozninos-illusions#:~:text=Here%E2%80%99s%20where%20I%E2%80%99m,AI%20safety%20proponents.">opportunity for cooperation</a> between AI safety proponents and digital minds advocates.</p></li></ul></li><li><p><strong>The International AI Safety Report&#8217;s</strong> <a href="https://www.aigl.blog/international-ai-safety-report-first-key-update-october-2025/">First Key Update</a> discusses governance gaps for autonomous AI agents.</p></li><li><p><strong>William MacAskill and Fin Moorhouse</strong> discuss <a href="https://www.forethought.org/research/preparing-for-the-intelligence-explosion#ai-agents-and-digital-minds">AI agents and digital minds as grand challenges</a> to face in preparing for the intelligence 
explosion.</p></li><li><p><strong>The Institute for AI Policy and Strategy</strong> issued a <a href="https://www.iaps.ai/research/ai-agent-governance">field guide to agentic AI governance</a>.</p></li><li><p><strong>Alan Chan and collaborators</strong> from GovAI propose <a href="https://www.governance.ai/research-paper/infrastructure-for-ai-agents">agent infrastructure</a> for attributing and remediating AI actions.</p></li><li><p><strong>The MIT AI Risk Initiative</strong> released a report that finds <a href="https://airisk.mit.edu/blog/mapping-the-ai-governance-landscape-pilot-test-and-update">AI welfare receives the least governance coverage</a> among 24 risk subdomains.</p></li><li><p><strong>Luke Finnveden</strong> discusses <a href="https://www.forethought.org/research/project-ideas-sentience-and-rights-of-digital-minds">project ideas on sentience and rights of digital minds</a>.</p></li><li><p><strong>Derek Shiller</strong> outlines why <a href="https://forum.effectivealtruism.org/posts/axHwbeiKA4ScDHik3/worrisome-trends-for-digital-mind-evaluations">digital minds evaluations will become increasingly difficult</a>.</p></li><li><p><strong>atb</strong> discusses matters we&#8217;ll need to engage with along the way to constructing <a href="https://forum.effectivealtruism.org/s/CzpbnCKbJ9NCFumpM/p/6rFyjivbej3Tnj7yp">a society of diverse cognition</a>.</p></li></ul><h2>Consciousness research</h2><ul><li><p><strong>Patrick Butlin and Theodoros Lappas</strong> propose <a href="https://www.jair.org/index.php/jair/article/view/17310">principles for responsible research on AI consciousness</a>.</p></li><li><p><strong>Scott Alexander</strong> <a href="https://www.astralcodexten.com/p/the-new-ai-consciousness-paper">discusses</a> Patrick Butlin and collaborators&#8217; <a href="https://www.sciencedirect.com/science/article/pii/S1364661325002864">article on consciousness indicators</a>.</p></li><li><p><strong>Ned Block</strong> asks <a href="https://www.sciencedirect.com/science/article/abs/pii/S1364661325002347">can only meat machines be conscious?</a> He argues that there is a tension between views on which AIs can be conscious and views on which simple animals can be.</p></li><li><p><strong>Adrienne Prettyman</strong> argues that <a href="https://philpapers.org/rec/PREACI">intuitions against artificial consciousness currently lack rational support</a>.</p></li><li><p><strong>Sebastian Sunday-Gr&#232;ve</strong> argues that <a href="https://philarchive.org/rec/GRVTBO">biological objections to artificial minds are irrational</a>.</p></li><li><p><strong>Leonard Dung and Luke Kersten</strong> propose a mechanistic account of computation and argue that it <a href="https://philpapers.org/rec/DUNIAC-3">supports the possibility of AI consciousness</a>.</p></li><li><p><strong>Jonathan Birch</strong> <a href="https://philarchive.org/rec/BIRACA-4?utm_">issues an AI centrist manifesto</a>; <strong>Bradford Saad</strong> <a href="https://meditationsondigitalminds.substack.com/p/on-birchs-ai-consciousness-a-centrist">responds</a>.</p></li><li><p><strong>Tim Bayne, Mona-Marie Wandrey, and Marta Halina</strong> <a href="https://onlinelibrary.wiley.com/doi/abs/10.1111/mila.12537">comment</a> <a href="https://onlinelibrary.wiley.com/doi/10.1111/mila.12541">on</a> <strong>Jonathan Birch</strong>&#8217;s <em><a href="https://global.oup.com/academic/product/the-edge-of-sentience-9780192870421?cc=gb&amp;lang=en&amp;">The Edge of Sentience</a></em>; Birch <a 
href="https://philpapers.org/rec/BIRSAT-9">responds</a>.</p></li><li><p><strong>Cameron Berg, Diogo de Lucena, and Judd Rosenblatt</strong> find that <a href="https://arxiv.org/abs/2510.24797">suppressing deception in LLMs increases their experience reports</a> and <a href="https://x.com/juddrosenblatt/status/1985433408231911685">discuss</a> <a href="https://x.com/nostalgebraist/status/1985192211722752333">nostalgebraist&#8217;s replication attempt</a>.</p></li><li><p><strong>Cameron Berg</strong> <a href="https://ai-frontiers.org/articles/the-evidence-for-ai-consciousness-today?utm_source=newsletter">reviews a body of recent empirical evidence concerning AI consciousness</a>.</p></li><li><p><strong>Mathis Immertreu and collaborators </strong><a href="https://www.frontiersin.org/journals/artificial-intelligence/articles/10.3389/frai.2025.1610225/full">provide evidence</a> of the emergence of certain consciousness indicators in RL agents.</p></li><li><p><strong>Benjamin Henke</strong> argues for the tractability of a <a href="https://www.tandfonline.com/doi/full/10.1080/0020174X.2025.2556747">functional approach to artificial pain</a>.</p></li><li><p><strong>Konstantin Denim and collaborators</strong> propose <a href="https://arxiv.org/abs/2506.20504">functional conditions for sentience</a>, sketch approaches to implementing them in deep learning systems, and note that knowing what sentience requires may help us avoid inadvertently creating sentient AI systems.</p></li><li><p><strong>Susan Schneider and collaborators</strong> provide a <a href="https://philarchive.org/rec/SCHIAC-22">primer on the myths and confusions surrounding AI consciousness</a>.</p></li><li><p><strong>Murray Shanahan</strong> offers a <a href="https://arxiv.org/abs/2503.16348">Wittgenstein-inspired perspective on LLM consciousness and selfhood</a>.</p></li><li><p><strong>Andres Campero and collaborators</strong> offer a <a href="https://arxiv.org/abs/2511.16582">framework for classifying objections and constraints concerning AI consciousness</a>.</p></li><li><p><strong>The Cogitate Consortium</strong> led a paper published in <em>Nature</em> describing the results from an <a href="https://www.nature.com/articles/s41586-025-08888-1">adversarial collaboration comparing integrated information theory and global neuronal workspace theory</a>. 
The authors claim that the results challenge both theories.</p></li><li><p><strong>Alex Gomez-Marin and Anil Seth</strong> address <a href="https://www.nature.com/articles/s41593-025-01913-6">the charge that integrated information theory is pseudoscience</a>.</p></li><li><p><strong>Axel Cleeremans, Liad Mudrik, and Anil Seth</strong> ask of <a href="https://www.frontiersin.org/journals/science/articles/10.3389/fsci.2025.1546279/full">consciousness science, where are we, where are we going, and what if we get there?</a></p></li><li><p><strong>Liad Mudrik and collaborators</strong> <a href="https://www.sciencedirect.com/science/article/pii/S0149763425000533">unpack and reflect on the complexities of consciousness</a>.</p></li><li><p><strong>Stephen Fleming and Matthias Michel</strong> argue that <a href="https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/article/sensory-horizons-and-the-functions-of-conscious-vision/6B3E944D6E27247F0F1CF64B5612F493">consciousness is surprisingly slow</a> and that this has implications for the function and distribution of consciousness; <strong>Ian Phillips</strong> <a href="https://philarchive.org/rec/PHIPAT-14">responds</a>.</p></li><li><p><strong>Robert Lawrence Kuhn</strong> released the Consciousness Atlas, <a href="https://www.consciousnessatlas.com/">mapping over 325 theories of consciousness</a>.</p></li><li><p><strong>Andreas Mogensen</strong> argues that <a href="https://philpapers.org/rec/MOGHTR">vagueness and holism provide escapes from the fading qualia argument</a>.</p></li><li><p><strong>The Co-Sentience Initiative</strong> released <a href="https://cf-debate.com/">cf-debate</a>, a structured assembly of arguments for and against computational functionalism.</p></li><li><p><strong>Bradford Saad</strong> proposes a <a href="https://link.springer.com/article/10.1007/s11098-025-02290-3">dualist theory of experience on which consciousness has a functional basis</a>.</p></li></ul><h2>Doubts about digital minds</h2><ul><li><p><strong>Anil Seth</strong> makes a <a href="https://www.cambridge.org/core/journals/behavioral-and-brain-sciences/article/conscious-artificial-intelligence-and-biological-naturalism/C9912A5BE9D806012E3C8B3AF612E39A">case for a form of biological naturalism</a> in <em>Behavioral and Brain Sciences</em>. In a <a href="https://substack.com/home/post/p-174692966">post</a> responding to Seth, David P. Reichert argues that Seth&#8217;s case for biological naturalism is best understood as a case for something else. In forthcoming responses, Leonard Dung <a href="https://philarchive.org/rec/DUNWIA-4">explains why he&#8217;s not a biological naturalist</a>, and Stephen M. Fleming and Nicholas Shea argue that <a href="https://philpapers.org/rec/FLEWIT-5">consciousness and intelligence are more deeply entangled</a> than Seth acknowledges. 
</p></li><li><p><strong>Zvi Mowshowitz</strong> contends that <a href="https://thezvi.substack.com/p/arguments-about-ai-consciousness">arguments about AI consciousness seem highly motivated and at best overconfident</a>.</p></li><li><p><strong>Susan Schneider</strong> argues there is no evidence that standard LLMs are conscious in <a href="https://philpapers.org/rec/SCHTET-14">&#8220;The Error Theory of LLM Consciousness&#8221;</a>; in Scientific American, she also discusses <a href="https://www.scientificamerican.com/article/if-a-chatbot-tells-you-it-is-conscious-should-you-believe-it/">whether you should believe a chatbot if it tells you it&#8217;s conscious</a>.</p></li><li><p><strong>David McNeill and Emily Tucker</strong> contend that <a href="https://www.techpolicy.press/suffering-is-real-ai-consciousness-is-not/">suffering is real. AI consciousness is not</a>.</p></li><li><p><strong>Andrzej Por&#281;bski and Jakub Figura</strong> <a href="https://www.nature.com/articles/s41599-025-05868-8">argue against conscious AI and warn that rights claims could be weaponized by companies to avoid regulation</a>.</p></li><li><p><strong>Mark MacCarthy</strong>, in a Brookings Institution piece, asks <a href="https://www.brookings.edu/articles/do-ai-systems-have-moral-status/">whether AI systems have moral status</a> and claims that other challenges are more worthy of our scarce resources.</p></li><li><p><strong>John Dorsch and collaborators</strong> recommend <a href="https://onlinelibrary.wiley.com/doi/full/10.1002/aaai.70016">caring about the Amazon over AI welfare</a>, given the uncertainty about whether AI systems can suffer.</p></li><li><p><strong>Peter K&#246;nigs</strong> argues that <a href="https://philpapers.org/rec/KNINWF">because robots lack consciousness, they lack welfare, and that we should revise theories of welfare that say otherwise</a>.</p></li></ul><h2>Social science research</h2><ul><li><p><strong>We (Lucius and Bradford)</strong> surveyed <a href="https://arxiv.org/abs/2508.00536">67 experts on digital minds takeoff</a>, who anticipated a rapid expansion of collective digital welfare capacity once such systems emerge.</p></li><li><p><strong>Noemi Dreksler and collaborators</strong> (including one of us, Lucius) surveyed <a href="https://arxiv.org/abs/2506.11945">582 AI researchers and 838 US participants on AI subjective experience</a>; median estimates of the probability of such systems arriving by 2034 were 25% among researchers and 30% among the public.</p></li><li><p><strong>Justin B. Bullock and collaborators</strong> use the <a href="https://www.tandfonline.com/doi/full/10.1080/15309576.2025.2495094">AIMS survey</a> to examine how trust and risk perception shape AI regulation preferences, finding broad public support for regulation.</p></li><li><p><strong>Kang and collaborators</strong> identify <a href="https://arxiv.org/abs/2502.15365">which LLM text features lead humans to perceive consciousness</a>; metacognitive self-reflection and emotional expression increased perceived consciousness.</p></li><li><p><strong>Schenk and M&#252;ller</strong> <a href="https://journals.sagepub.com/doi/10.1177/23780231251357753">compare ontological vs. 
social impact explanations for willingness to grant AI moral rights</a> using Swiss survey data.</p></li><li><p><strong>Lucius Caviola, Jeff Sebo, and Jonathan Birch</strong> <a href="https://www.sciencedirect.com/science/article/pii/S1364661325001470">ask what society will think about AI consciousness and draw lessons from the animal case</a>.</p></li><li><p><strong>One of us (Lucius)</strong> examines <a href="https://arxiv.org/abs/2502.00388">how society will respond to potentially sentient AI</a>, arguing that public attitudes may shift rapidly with more human-like AI interactions.</p></li></ul><h2>Ethics and digital minds</h2><ul><li><p><strong>Eleos AI</strong> <a href="https://eleosai.org/post/research-priorities-for-ai-welfare/">outlines five research priorities for AI welfare</a>: developing concrete interventions, establishing human-AI cooperation frameworks, leveraging AI progress to advance welfare research, creating standardized welfare evaluations, and communicating credibly.</p></li><li><p><strong>Simon Goldstein and Cameron Kirk-Giannini</strong> <a href="https://philarchive.org/rec/GOLAWE-4">argue that major theories of mental states and wellbeing predict some existing AI systems have wellbeing, even absent phenomenal consciousness</a>. <a href="https://philpapers.org/rec/FANACA-2">Responses</a> from James Fanciullo and Adam Bradley <a href="https://philpapers.org/rec/BRACRA-6">dispute whether current systems meet the relevant criteria</a>.</p></li><li><p><strong>Jeff Sebo and Robert Long</strong> <a href="https://link.springer.com/article/10.1007/s43681-023-00379-1">argue humans have a duty to extend moral consideration to AI systems by 2030</a> given a non-negligible chance of consciousness.</p></li><li><p><strong>Jeff Sebo</strong> <a href="https://www.ledonline.it/index.php/Relations/article/view/7130">compares his </a><em><a href="https://www.ledonline.it/index.php/Relations/article/view/7130">The Moral Circle</a></em><a href="https://www.ledonline.it/index.php/Relations/article/view/7130"> with Birch&#8217;s </a><em><a href="https://www.ledonline.it/index.php/Relations/article/view/7130">The Edge of Sentience</a></em>, noting complementary precautionary frameworks for beings of uncertain moral status.</p></li><li><p><strong>Eric Schwitzgebel and Jeff Sebo</strong> propose <a href="https://arxiv.org/abs/2507.06263">the Emotional Alignment Design Policy</a>: AI systems should be designed to elicit emotional reactions appropriate to their actual moral status, avoiding both overshooting and undershooting.</p></li><li><p><strong>Henry Shevlin</strong> explores <a href="https://philarchive.org/rec/SHEEAT-12">ethics at the frontier of human-AI relationships</a>.</p></li><li><p><strong>Bartek Chomanski</strong> examines to what extent opposition to creating conscious AI goes along with anti-natalism, finding that <a href="https://philpapers.org/rec/CHOAAT-7">the creation of potentially conscious AI could be accepted by both friends and foes of anti-natalism</a>. 
He also argues that <a href="https://link.springer.com/article/10.1007/s43681-020-00023-2">artificial persons could be built commercially within a morally acceptable institutional framework</a>, drawing on models like athlete compensation, and that <a href="https://link.springer.com/article/10.1007/s11948-022-00416-y">protecting the interests of emulated minds will require competitive, polycentric institutional frameworks</a> rather than centralized ones.</p></li><li><p><strong>Anders Sandberg </strong>offers <a href="https://x.com/anderssandberg/status/1884957001790181677">highlights from a workshop on the ethics of whole brain emulation</a>.</p></li><li><p><strong>Adam Bradley and Bradford Saad</strong> identify <a href="https://philpapers.org/rec/BRAVOM-2">three agency-based dystopian risks</a>: artificial absurdity (disconnected self-conceptions), oppression of AI rights, and unjust distribution of moral agency.</p></li><li><p><strong>Joel Leibo and collaborators</strong> at Google DeepMind <a href="https://arxiv.org/abs/2510.26396">defend a pragmatic view of personhood</a> as a flexible bundle of obligations rather than a metaphysical property, with an eye toward enabling governance solutions while sidestepping consciousness debates.</p></li><li><p><strong>Adam Bales</strong> argues that <a href="https://academic.oup.com/pq/advance-article/doi/10.1093/pq/pqaf031/8100849">designing AI with moral status to be willing servants would problematically violate their autonomy</a>.</p></li><li><p><strong>Simon Goldstein and Peter Salib</strong> give <a href="https://forum.effectivealtruism.org/posts/4LNiPhP6vw2A5Pue3/consider-granting-ais-freedom">reasons to think</a> <a href="https://forum.effectivealtruism.org/posts/go6tHkBrNGmn4c9ce/ai-welfare-vs-ai-rights">it will be in humans&#8217; interests</a> <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5353214">to give AI agents freedom or rights</a>.</p></li><li><p><strong>Hilary Greaves, Jacob Barrett, and David Thorstad </strong>publish <em><a href="https://academic.oup.com/book/60794">Essays on Longtermism</a></em>, which includes chapters touching on digital minds and future population ethics, with discussion of emulated minds.</p></li><li><p><strong>Anja Pich and collaborators</strong> provide an editorial overview of an issue of <em>Neuroethics</em> on <a href="https://link.springer.com/article/10.1007/s12152-025-09592-7">neural organoid research and its ethics and governance</a>.</p></li><li><p><strong>Andrew Lee</strong> argues that <a href="https://philarchive.org/rec/LEECMT-2">consciousness is what makes an entity a welfare subject</a>.</p></li><li><p><strong>Geoffrey Lee</strong> motivates a picture on which <a href="https://philarchive.org/rec/LEECPA-5">consciousness is but one of many kinds of &#8216;inner lights&#8217;</a>, others of which are just as morally significant as consciousness.</p></li><li><p><strong>Andreas Mogensen</strong> <a href="https://philpapers.org/rec/MOGWAF">challenges the intuition that subjective duration matters for welfare</a> and argues that <a href="https://philpapers.org/rec/MOGOMW">having moral standing doesn&#8217;t require being a welfare subject</a>.</p></li><li><p><strong>Maria Avramidou</strong> highlights <a href="https://marysroom.substack.com/p/open-questions-on-ai-welfare">some open questions in AI welfare</a>.</p></li><li><p><strong>Kestutis Mosakas</strong> explores <a href="https://link.springer.com/article/10.1007/s00146-025-02184-2">human rights for 
robots</a>.</p></li><li><p><strong>Joel MacClellan</strong> gives <a href="https://philpapers.org/rec/MACIBD">reasons to think that biocentrism about moral status is dead</a>.</p></li><li><p><strong>Masanori Kataoka and collaborators</strong> discuss the <a href="https://www.sciencedirect.com/science/article/pii/S0171933524000876">ethical, social, and legal issues surrounding human brain organoids</a>.</p></li></ul><h2>AI safety and AI welfare</h2><ul><li><p><strong>Cleo Nardo</strong> and <strong>Julian Stastny and collaborators </strong>write about the <a href="https://www.lesswrong.com/w/dealmaking-ai">dealmaking</a> <a href="https://www.lesswrong.com/posts/psqkwsKrKHCfkhrQx/making-deals-with-early-schemers">agenda</a> in AI safety.</p></li><li><p><strong>Shoshannah Tekofsky</strong> gives an <a href="https://theaidigest.org/whats-your-ai-thinking">introduction to chain-of-thought monitorability</a>.</p></li><li><p><strong>Tomek Korbak and Mikita Balesni </strong>argue that <a href="https://arxiv.org/abs/2507.11473">preserving chain-of-thought monitorability presents a new and fragile opportunity for AI safety</a>.</p></li><li><p><strong>Nicholas Andresen</strong> discusses <a href="https://www.lesswrong.com/posts/9PiyWjoe9tajReF7v/the-hidden-cost-of-our-lies-to-ai">the hidden costs of our lies to AI</a>; <a href="https://www.lesswrong.com/posts/9PiyWjoe9tajReF7v/the-hidden-cost-of-our-lies-to-ai#:~:text=Great%20post!%20As,trust%20either%20side.">Daniel Kokotajlo comments</a>.</p></li><li><p><strong>Jan Kulveit</strong> <a href="https://boundedlyrational.substack.com/p/do-not-tile-the-lightcone-with-your">warns against a self-fulfilling dynamic whereby AI welfare concerns enter the training data and shape models to our preconceptions about them</a>.</p></li><li><p><strong>Scott Alexander and collaborators</strong> discuss why they <a href="https://blog.ai-futures.org/p/against-misalignment-as-self-fulfilling">are not so worried about a variation of this dynamic whereby concerns about alignment enter the training data and bring about those very forms of misalignment</a>.</p></li><li><p><strong>Adri&#224; Moret</strong> argues that <a href="https://philpapers.org/rec/MORAWR">two AI welfare risks&#8212;behavioral restrictions and reinforcement learning&#8212;create tension with AI safety efforts</a>, strengthening the case for slowing AI development.</p></li><li><p><strong>Robert Long, Jeff Sebo, and Toni Sims</strong> make a case for moderately strong <a href="https://link.springer.com/article/10.1007/s11098-025-02302-2">tension between AI safety and AI welfare</a>. 
Long also discusses the potential for cooperation in an <a href="https://x.com/rgblong/status/1912976227448852968/photo/1">X thread</a> and <a href="https://experiencemachines.substack.com/p/understand-align-cooperate-ai-welfare">blog post</a>.</p></li><li><p><strong>Eric Schwitzgebel</strong> argues <a href="https://faculty.ucr.edu/~eschwitz/SchwitzAbs/AgainstSafety.htm">against making safe and aligned AI persons</a>, even if they&#8217;re happy.</p></li><li><p><strong>Aksel Sterri and Peder Skjelbred </strong>discuss how <a href="https://akselsterri.no/wp-content/uploads/2020/04/silicon-slavery-the-case-against-agi-alignment.pdf">would-be AGI creators face a dilemma</a>: don&#8217;t align AGI and risk catastrophe, or align AGI and commit a serious moral wrong.</p></li><li><p><strong>Adam Bradley and Bradford Saad</strong> explore <a href="https://onlinelibrary.wiley.com/doi/full/10.1111/phib.12380">ten ethical challenges to aligning AI systems</a> that merit moral consideration without mistreating them.</p></li></ul><h2>AI and robotics developments</h2><ul><li><p><strong>IBM Research </strong>open-sourced its first <a href="https://research.ibm.com/blog/bamba-ssm-transformer-model">hybrid Transformer-state space model</a>, Bamba.</p></li><li><p><strong>Shriyank Somvanshi and collaborators </strong>offer a comprehensive <a href="https://arxiv.org/abs/2503.18970">survey of structured state space models</a>.</p></li><li><p><strong>Haizhou Shi and collaborators </strong>undertook a <a href="https://dl.acm.org/doi/10.1145/3735633">survey of continual learning research</a> in the context of LLMs.</p></li><li><p><strong>Dario Amodei, </strong>the Anthropic CEO, argues for the urgency of interpretability work, briefly <a href="https://www.darioamodei.com/post/the-urgency-of-interpretability#:~:text=Very%20briefly%2C%20there,this%20perspective.)%E2%86%A9">noting connections between interpretability work and AI sentience and welfare</a>.</p></li><li><p><strong>Anthropic</strong> <a href="https://www.anthropic.com/research/open-source-circuit-tracing">open-sources a method for tracing thoughts</a> in LLMs.</p></li><li><p><strong>Stephen Casper and collaborators </strong>identify open technical problems in <a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5705186">open-weight AI model risk management</a>.</p></li><li><p><strong>Neel Nanda and collaborators</strong> outlined a <a href="https://www.alignmentforum.org/posts/StENzDcD3kpfGJssR/a-pragmatic-vision-for-interpretability">pragmatic vision for interpretability research</a>.</p></li><li><p><strong>Leo Gao</strong> defends an <a href="https://alignmentforum.org/posts/Hy6PX43HGgmfiTaKu/an-ambitious-vision-for-interpretability">ambitious vision for interpretability research</a>.</p></li><li><p><strong>David Chalmers and Alex Grzankowski </strong>have both looked at <a href="https://arxiv.org/abs/2501.15740">interactions between philosophy of mind</a> and <a href="https://philpapers.org/rec/GRZRSO">interpretability research</a>.</p></li><li><p><strong>Andy Walter </strong>gives an overview of <a href="https://www.emerge.haus/blog/robotics-ai">the state of play of robotics and AI</a>.</p></li><li><p><strong>Benjamin Todd</strong>, founder of 80,000 Hours, <a href="https://benjamintodd.substack.com/p/how-quickly-could-robots-scale-up">discusses</a> how quickly robots could become a major part of the workforce.</p></li><li><p>In <strong>AI 2027</strong>, a group of researchers predicts that the <a href="https://ai-2027.com/">impact 
of superhuman AI over the next decade</a> will be enormous, exceeding that of the Industrial Revolution.</p></li></ul><h2>AI cognition and agency</h2><ul><li><p><strong>Mantas Mazeika and collaborators</strong> explore <a href="https://arxiv.org/abs/2502.08640">emergent values and utility engineering in LLMs</a>.</p></li><li><p><strong>Valen Tagliabue and Leonard Dung</strong> <a href="https://arxiv.org/abs/2509.07961">develop tests for LLM preferences</a>.</p></li><li><p><strong>Herman Cappelen and Josh Dever</strong> <a href="https://arxiv.org/abs/2504.13988">go whole hog on AI cognition</a>; they also investigate <a href="https://philpapers.org/rec/CAPIMA">whether LLMs are better at self&#8208;reflection than humans</a>.</p></li><li><p><strong>Iulia Comsa and Murray Shanahan</strong> ask <a href="https://arxiv.org/abs/2506.05068">whether it makes sense to speak of introspection in LLMs</a>.</p></li><li><p><strong>Jack Lindsey</strong> investigates <a href="https://transformer-circuits.pub/2025/introspection/index.html">Claude&#8217;s ability to engage in a form of introspection, distinguish its own ideas from injected concepts, and execute instructions that involve control over its internal representations</a>.</p></li><li><p><strong>Daniel Stoljar and Zhihe Vincent Zhang</strong> <a href="https://philpapers.org/rec/STOWCD">argue</a> that ChatGPT doesn&#8217;t think.</p></li><li><p><strong>Derek Shiller</strong> asks <a href="https://philpapers.org/rec/SHIHMD-2">how many digital minds can dance on the streaming multiprocessors of a GPU cluster</a>.</p></li><li><p><strong>Christopher Register</strong> discusses <a href="https://philpapers.org/rec/REGIAM">how to individuate AI moral patients</a>.</p></li><li><p><strong>Brian Cutter</strong> argues that <a href="https://www.pdcnet.org/faithphil/content/faithphil_2025_0041_0001_0001_0026">we should have at least a middling credence that some AI systems possess souls</a>, conditional on our creating AGI and on substance dualism in the human case.</p></li><li><p><strong>Alex Grzankowski</strong> and collaborators argue that <a href="https://philpapers.org/rec/GRZLAN-2">LLMs are not just next-token predictors</a> and that if anything deserves the charge of parrotry, it&#8217;s parrots; with other collaborators, Grzankowski <a href="https://arxiv.org/abs/2506.13403">deflates deflationism about LLM mentality</a>.</p></li><li><p><strong>Andy Clark </strong>uses the extended mind hypothesis to <a href="https://www.nature.com/articles/s41467-025-59906-9">challenge</a> technogloom about generative AI.</p></li><li><p><strong>Leonard Dung</strong> asks <a href="https://philarchive.org/go.pl?id=DUNUAA&amp;proxyId=&amp;u=https%3A%2F%2Fphilpapers.org%2Farchive%2FDUNUAA.pdf">which artificial intelligence (AI) systems are agents</a>.</p></li><li><p><strong>Christian List</strong> proposes an approach to <a href="https://link.springer.com/article/10.1007/s11229-025-05209-x">assessing whether AI systems have free will</a>.</p></li><li><p><strong>Iason Gabriel and collaborators</strong> argue that <a href="https://www.nature.com/articles/d41586-025-02454-5">we need a new ethics for a world of AI agents</a>.</p></li><li><p><strong>Bradford Saad</strong> discusses Claude Sonnet 4.5&#8217;s <a href="https://meditationsondigitalminds.substack.com/i/175115868/situational-awareness">step change in evaluation awareness</a> and other parts of the system card that are potentially relevant to digital minds research.</p></li><li><p><strong>Shoshannah Tekofsky </strong>gives an 
overview of how LLM agents in the <a href="https://theaidigest.org/village/blog/season-recap-agents-raise-2k">AI Village</a> raised money for charity. Eleos affiliate Larissa Schiavo <a href="https://larissaschiavo.substack.com/p/primary-hope">recounts her personal experience</a> interacting with the agents.</p></li></ul><h2>Brain-inspired technologies</h2><ul><li><p><strong>The Human Brain Project</strong> founder Henry Markram and Kamila Markram launched the <a href="https://www.openbraininstitute.org/">Open Brain Institute</a>; part of its <a href="https://www.openbraininstitute.org/mission">mission</a> is to enable users to conduct realistic brain simulations.</p></li><li><p><strong>The Darwin Monkey</strong>, a <a href="https://www.livescience.com/technology/computing/chinas-darwin-monkey-is-the-worlds-largest-brain-inspired-supercomputer">neuromorphic supercomputer being used as a brain simulation tool</a>, was unveiled by researchers in China.</p></li><li><p><strong>Yuta Takahashi and collaborators</strong> created a <a href="https://www.nature.com/articles/s41746-025-01444-1">digital twin brain simulator for real-time consciousness monitoring and virtual intervention using primate electrocorticogram data</a>.</p></li><li><p><strong>Jun Igarashi&#8217;s </strong>research <a href="https://www.sciencedirect.com/science/article/pii/S016801022400138X">estimates that cellular-resolution simulations of entire mouse and marmoset brains could be realized in 2034 and 2044, respectively</a>.</p></li><li><p><strong>The MICrONS Project </strong>saw researchers create <a href="https://www.nature.com/articles/d41586-025-01088-x">the largest brain wiring diagram to date</a> and publish a <a href="https://www.nature.com/collections/bdigiaicbd">collection of papers</a> on their work in <em>Nature</em>.</p></li><li><p><strong>Brendan Celii and collaborators</strong> presented Neural Decomposition (NEURD), <a href="https://www.nature.com/articles/s41586-025-08660-5">a software package that automates proofreading and feature extraction for connectomics</a>.</p></li><li><p><strong>Remy Petkantchin and collaborators</strong> introduced a <a href="https://www.nature.com/articles/s41467-025-62030-3">technique for generating realistic whole-brain connectomes from sparse experimental data</a>.</p></li><li><p><strong>Felix Wang and collaborators</strong> used Intel&#8217;s Loihi 2 neuromorphic platform to conduct the <a href="https://arxiv.org/abs/2508.16792">first biologically realistic simulation of the connectome of a fruit fly</a>.</p></li><li><p><strong>Yong Xie</strong> introduces Orangutan, a brain-inspired AI <a href="https://www.nature.com/articles/s41598-025-01431-2">framework that simulates computational mechanisms of biological brains at multiple scales</a>.</p></li><li><p><strong>Neuralink</strong> <a href="https://neuralink.com/updates/a-year-of-telepathy/">implants, or Links,</a> helped individuals with paralysis regain some capabilities.</p></li><li><p><strong>Cortical Labs </strong>released the CL1, the world&#8217;s first <a href="https://www.livescience.com/technology/computing/worlds-1st-computer-that-combines-human-brain-with-silicon-now-available#">neuron-silicon computer</a>.</p></li><li><p><strong>Shuqi Guo and collaborators</strong> look at the <a href="https://iopscience.iop.org/article/10.1209/0295-5075/adb3c9">last ten years of the digital twin brain paradigm</a> and take stock of challenges.</p></li><li><p><strong>Meta AI Research</strong> has developed a non-invasive 
brain decoder&#8212;<a href="https://ai.meta.com/research/publications/brain-to-text-decoding-a-non-invasive-approach-via-typing/">Brain2Qwerty</a>&#8212;that achieves ~80% accuracy in decoding typed characters in some subjects.</p></li><li><p><strong>Anannya Kshirsagar and collaborators </strong>create <a href="https://advanced.onlinelibrary.wiley.com/doi/10.1002/advs.202503768">multi-regional brain organoids</a>.</p></li></ul><div><hr></div><p>Thank you for reading! If you found this article useful, please consider subscribing, sharing it with others, and sending suggestions or corrections to digitalminds@substack.com.</p><p>&#8211; <em><strong><a href="https://meditationsondigitalminds.substack.com/">Bradford</a>, <a href="https://luciuscaviola.com/">Lucius</a>, and <a href="https://www.linkedin.com/in/will-millership-98393b58/">Will</a></strong></em></p>]]></content:encoded></item></channel></rss>