There are currently no widely accepted, industry-wide standards for managing reproductive health risks in space, the study notes. The researchers highlight unresolved questions around preventing inadvertent early pregnancy during missions, understanding the fertility impacts of microgravity and radiation, and setting ethical boundaries for any future reproduction-related research beyond Earth.
“If reproduction is ever to occur beyond Earth,” the study notes, “it must do so with a clear commitment to safety, transparency and ethical integrity.”
This research is described in a paper published Feb. 3 in the journal Reproductive Biomedicine Online.
In this Mind-Body Solution Colloquia, Michael Levin and Robert Chis-Ciure challenge one of neuroscience’s deepest assumptions: that cognition and intelligence are exclusive to brains and neurons.
Drawing on cutting-edge work in bioelectricity, developmental biology, and philosophy of mind, this conversation explores how cells, tissues, and living systems exhibit goal-directed behavior, memory, and problem-solving — long before neurons ever appear.
We explore:
• Cognition without neurons.
• Bioelectric networks as control systems.
• Memory and learning beyond synapses.
• Morphogenesis as collective intelligence.
• Implications for AI, consciousness, and ethics.
This episode pushes neuroscience beyond the neuron, toward a deeper understanding of mind, life, and intelligence as continuous across scales.
Bach reframes AI as the endpoint of a long philosophical project to “naturalize the mind,” arguing that modern machine learning operationalizes a lineage from Aristotle to Turing in which minds, worlds, and representations are computational state-transition systems. He claims computer science effectively re-discovers animism—software as self-organizing, energy-harvesting “spirits”—and that consciousness is a simple coherence-maximizing operator required for self-organizing agents rather than a metaphysical mystery. Current LLMs only simulate phenomenology using deepfaked human texts, but the universality of learning systems suggests that, when trained on the right structures, artificial models could converge toward the same internal causal patterns that give rise to consciousness. Bach proposes a biological-to-machine consciousness framework and a research program (CIMC) to formalize, test, and potentially reproduce such mechanisms, arguing that understanding consciousness is essential for culture, ethics, and future coexistence with artificial minds.
Key takeaways.
▸ Speaker & lens: Cognitive scientist and AI theorist aiming to unify philosophy of mind, computer science, and modern ML into a single computationalist worldview. ▸ AI as philosophical project: Modern AI fulfills the ancient ambition to map mind into mathematics; computation provides the only consistent language for modeling reality and experience. ▸ Computationalist functionalism: Objects = state-transition functions; representations = executable models; syntax = semantics in constructive systems. ▸ Cyber-animism: Software as “spirits”—self-organizing, adaptive control processes; living systems differ from dead ones by the software they run. ▸ Consciousness as function: A coherence-maximizing operator that integrates mental states; second-order perception that stabilizes working memory; emerges early in development as a prerequisite for learning. ▸ LLMs & phenomenology: Current models aren’t conscious; they simulate discourse about consciousness using data full of “deepfaked” phenomenology. A Turing test cannot detect consciousness because performance ≠ mechanism. ▸ Universality hypothesis: Different architectures optimized for the same task tend to converge on similar internal causal structures; suggests that consciousness-like organization could arise if it’s the simplest solution to coherence and control. ▸ Philosophical zombies: Behaviorally identical but non-conscious agents may be more complex than conscious ones; evolution chooses simplicity → consciousness may be the minimal solution for self-organized intelligence. ▸ Language vs embodiment: Language may contain enough statistical structure to reconstruct much of reality; embodiment may not be strictly necessary for convergent world models. ▸ Testing for machine consciousness: Requires specifying phenomenology, function, search space, and success criteria—not performance metrics. ▸ CIMC agenda: Build frameworks and experiments to recreate consciousness-like operators in machines; explore implications for ethics, interfaces, and coexistence with future minds.
Are we building AI that enhances humanity or a master race of beautifully optimized psychopaths?
My latest Singularity.FM conversation with Dr. Eve Poole goes straight to the nerve:
What makes us human, and what happens when we leave that out of our machines?
Eve argues that the very things Silicon Valley dismisses as “junk code”—our emotions, intuition, uncertainty, meaning-making, story, conscience, even our mistakes—aren’t flaws in our design. They’re the *reason* our species survived. And we’re coding almost none of it into AI.
The result? Systems with immense intelligence but no soul, no context, no humanity—and therefore, no reason to value us.
In this wide-ranging conversation, we dig into:
🔹 Why the real hallmarks of humanity aren’t IQ but junk code
🔹 Consciousness, soul, and the limits of rationalist AI thinking
🔹 Theology, capitalism & tech: how we ended up copying the wrong parts of ourselves
🔹 Why “alignment” is really a parenting challenge, not a control problem
🔹 What Tolkien, eucatastrophe, and ancient stories can teach us about surviving the future
🔹 Why programming in humanity isn’t for AI’s sake—it’s for ours.
The 2026 Timeline: AGI Arrival, Safety Concerns, Robotaxi Fleets & Hyperscaler Timelines
The rapid advancement of AI and related technologies is expected to bring about a transformative turning point in human history by 2026, making traditional measures of economic growth, such as GDP, obsolete and requiring new metrics to track progress.
Questions to inspire discussion.
Measuring and Defining AGI
🤖 Q: How should we rigorously define and measure AGI capabilities? A: Use benchmarks to quantify specific capabilities rather than debating terminology, enabling clear communication about what AGI can actually do across multiple domains like marine biology, accounting, and art simultaneously.
🧠 Q: What makes AGI fundamentally different from human intelligence? A: AGI represents a form of intelligence that is complementary and orthogonal to human intelligence rather than a replica of it, with the potential to find cross-domain insights by combining expertise across fields humans typically can’t master simultaneously.
📊 Q: How can we measure AI self-awareness and moral status? A: Apply personhood benchmarks that quantify AI models’ self-awareness and requirements for moral treatment; Opus 4.5 is currently state-of-the-art on these metrics, enabling rigorous comparison across models.
🤖 Q: How quickly will AI and robotics replace human jobs? A: AI and robotics will do half or more of all jobs within the next 3–7 years, with white-collar work being replaced first, followed by blue-collar labor through humanoid robots.
🏢 Q: What competitive advantage will AI-native companies have? A: Companies that are entirely AI-powered will demolish competitors, much as a spreadsheet computed entirely by software outcompetes one that still depends on a single manually calculated cell.
💼 Q: What forces companies to adopt more AI? A: Companies using more AI outcompete those using less, creating a forcing function for increased AI adoption; for now, inertia keeps humans doing tasks AI is already capable of.
📊 Q: How much of enterprise software development can AI handle autonomously? A: Blitzy, an AI platform using thousands of specialized agents, autonomously handles 80%+ of enterprise software development, increasing engineering velocity 5x when paired with human developers.
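For intuition on how the 80% and 5x figures in the last answer could relate, here is a rough back-of-the-envelope sketch (an illustrative assumption, not a detail stated about Blitzy): if the platform autonomously absorbs a fixed share of the work and humans handle the rest, the throughput gain is the reciprocal of the remaining human share, in the spirit of Amdahl's law.

```python
def velocity_multiplier(autonomous_share: float) -> float:
    """Throughput gain if an AI platform absorbs `autonomous_share` of the work
    and the remaining human-handled share becomes the bottleneck."""
    human_share = 1.0 - autonomous_share
    return 1.0 / human_share

# 80% handled autonomously -> 1 / 0.2 = 5x engineering velocity.
print(velocity_multiplier(0.80))  # 5.0
```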
🧠 VIDEO SUMMARY: CRISPR gene editing in 2025 is no longer science fiction. From curing rare immune disorders and type 1 diabetes to lowering cholesterol and reversing blindness in mice, breakthroughs are transforming medicine today. With AI accelerating precision tools like base editing and prime editing, CRISPR not only cures diseases but also promises longer, healthier lives and maybe even longevity escape velocity.
0:00 – INTRO — First human treated with prime editing.
0:35 – The DNA Problem.
1:44 – CRISPR 1.0 — The Breakthrough.
3:19 – AI + CRISPR 2.0 & 3.0.
4:47 – Epigenetic Reprogramming.
5:54 – From the Lab to the Body.
7:28 – Risks, Ethics & Power.
8:59 – The 2030 Vision.
👇 Don’t forget to check out the first three parts in this series:
Part 1 – “Longevity Escape Velocity: The Race to Beat Aging by 2030”
Part 2 – “Longevity Escape Velocity 2025: Latest Research Uncovered!”
Part 3 – “Longevity Escape Velocity: How AI is making us immortal by 2030!”
📌 Easy Insight simplifies the future — from longevity breakthroughs to mind-bending AI and quantum revolutions.
In the 21st century, powerful new technologies, such as various artificial intelligence (AI) agents, have become omnipresent and a center of public debate. With growing fears that AI agents could replace humans, there is debate about whether individuals should strive to enhance themselves. For instance, the philosophical movement of transhumanism proposes the broad enhancement of human characteristics such as cognitive abilities, personality, and moral values (e.g., Grassie and Hansell 2011; Ranisch and Sorgner 2014). Such enhancement is intended to help humans overcome their natural limitations and keep up with the powerful technologies that are increasingly present in today’s world (see Ranisch and Sorgner 2014). In the present article, we focus on one of the most frequently discussed forms of enhancement—the enhancement of human cognitive abilities.
Not only in science but also among the general population, cognitive enhancement, such as increasing one’s intelligence or working memory capacity, has been a frequently debated topic for many years (see Pauen 2019). Accordingly, a great deal of psychological and neuroscientific research has investigated methods to increase cognitive abilities, but effective methods for cognitive enhancement are so far lacking (Jaušovec and Pahor 2017). Nevertheless, multiple technologies, some of them new, that promise to enhance cognition are available to the general public. Transhumanists in particular promote the application of brain stimulation techniques, smart drugs, or gene editing for cognitive enhancement (e.g., Bostrom and Sandberg 2009). Importantly, little is known about the characteristics of individuals who would use such methods to improve their cognition. In the present study, we therefore investigated different predictors of the acceptance of several widely discussed enhancement methods. More specifically, we tested whether individuals’ psychometrically measured intelligence, self-estimated intelligence, implicit theories about intelligence, personality (Big Five and Dark Triad traits), specific interests (science-fiction hobbyism), and values (purity norms) predict their acceptance of cognitive enhancement (i.e., whether they would use such methods to enhance their cognition).
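To make the analytic logic of such a design concrete, here is a minimal sketch of how acceptance might be modeled from these predictors. It is not the authors' actual analysis; the column names, the ordinary-least-squares choice, and the simulated data are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data; in the real study each row would be one participant.
rng = np.random.default_rng(42)
n = 300
df = pd.DataFrame({
    "acceptance": rng.normal(size=n),          # acceptance of cognitive enhancement
    "iq": rng.normal(100, 15, size=n),         # psychometrically measured intelligence
    "self_estimated_iq": rng.normal(size=n),   # self-estimated intelligence
    "growth_mindset": rng.normal(size=n),      # implicit theories about intelligence
    "openness": rng.normal(size=n),            # one Big Five trait (others analogous)
    "narcissism": rng.normal(size=n),          # one Dark Triad trait (others analogous)
    "scifi_hobbyism": rng.normal(size=n),      # science-fiction interest
    "purity_norms": rng.normal(size=n),        # purity-related values
})

# Multiple regression: does each predictor explain unique variance in acceptance?
model = smf.ols(
    "acceptance ~ iq + self_estimated_iq + growth_mindset + openness"
    " + narcissism + scifi_hobbyism + purity_norms",
    data=df,
).fit()
print(model.summary())
```

The point of entering all predictors simultaneously is to ask whether each one explains variance in acceptance over and above the others.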
A new study published in Applied Psychology provides evidence that the belief in free will may carry unintended negative consequences for how individuals view gay men. The findings suggest that while believing in free will often promotes moral responsibility, it is also associated with less favorable attitudes toward gay men and preferential treatment for heterosexual men. This effect appears to be driven by the perception that sexual orientation is a personal choice.
Psychological research has historically investigated the concept of free will as a positive force in social behavior. Scholars have frequently observed that when people believe they have control over their actions, they tend to act more responsibly and helpfully. The general assumption has been that a sense of agency leads to adherence to moral standards. However, the authors of the current study argued that this sense of agency might have a “dark side” when applied to social groups that are often stigmatized.
The researchers reasoned that if people believe strongly in human agency, they may incorrectly attribute complex traits like sexual orientation to personal decision-making. This attribution could lead to the conclusion that gay men are responsible for their sexual orientation.
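The reasoning in the last two paragraphs is a classic mediation claim: belief in free will relates to attitudes toward gay men partly through the perception that sexual orientation is a choice. As a rough illustration only (hypothetical variable names and simulated data, not the authors' analysis), a simple regression-based mediation check could look like this:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in data: one row per participant.
rng = np.random.default_rng(7)
n = 400
free_will_belief = rng.normal(size=n)
perceived_choice = 0.5 * free_will_belief + rng.normal(size=n)           # mediator
attitude = -0.4 * perceived_choice + rng.normal(size=n)                  # outcome
df = pd.DataFrame({
    "free_will_belief": free_will_belief,
    "perceived_choice": perceived_choice,
    "attitude": attitude,
})

# Path a: does free will belief predict perceiving orientation as a choice?
path_a = smf.ols("perceived_choice ~ free_will_belief", data=df).fit()
# Path b (plus direct effect): does perceived choice predict attitudes,
# controlling for free will belief?
path_b = smf.ols("attitude ~ perceived_choice + free_will_belief", data=df).fit()

# The indirect (mediated) effect is the product of the two path coefficients.
indirect = path_a.params["free_will_belief"] * path_b.params["perceived_choice"]
print(f"indirect effect (a*b) = {indirect:.3f}")
```

In practice, the indirect effect would be tested with bootstrapped confidence intervals rather than read off directly.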