This report highlights another five of the most intriguing hits from our horizon scanning. All point to emerging AI possibilities beyond today's LLM-centric hype. Each signal is evaluated through the CIPHER lens, analysing implications, timelines, and confidence. All signals surfaced or gained fresh attention in May and June 2025.
1. Physical Nano-Electronic Networks: Sparsity Outperforms Density
Source: https://arxiv.org/pdf/2505.16813
Date: May 21, 2025
Signal
Mainstream AI thinking holds that more connections yield superior performance, but new evidence suggests the tech industry's obsession with "scaling up" may give way to a philosophy of "scaling smart."
Signal Description
A groundbreaking study demonstrates that sparse nano-electronic networks coupled with nonlinear dynamics dramatically outperform dense architectures in neuromorphic temporal processing, fundamentally challenging the "bigger is better" scaling assumptions that have dominated AI hardware design for decades. The research shows that by coupling node activities to edge dynamics through nonlinear nano-electronic elements, sparse networks create emergent computational capabilities that cannot be reduced to conventional neural architectures.
This discovery represents a radical departure from mainstream AI development, which has consistently pursued larger, denser networks, assuming that more connections yield superior performance. The findings suggest that computational efficiency may arise from constraint rather than capacity, directly contradicting the scaling laws that have driven billions in hardware investment. Most remarkably, the sparse networks achieve superior temporal processing performance while requiring significantly less energy and computational resources.
CIPHER Category
Primary – Contradictions: This signal exposes a fundamental contradiction in AI development orthodoxy: that network density correlates with computational capability. It reveals the tension between the promise of neuromorphic efficiency and the practice of brute-force scaling characterising current AI approaches. The research demonstrates that decades of investment in maximising connectivity may have been pursuing the wrong optimisation target.
Secondary – Inflections: The signal suggests a potential inflection point where hardware design pivots from maximising network density to optimising sparsity and dynamic coupling. This could launch a new design logic in neuromorphic hardware, similar to how RISC architectures disrupted complex instruction set computing.
Implications
Short-term (2025–2028): Neuromorphic hardware companies will rapidly prototype sparse architectures, leading to a new generation of energy-efficient AI chips. Research labs will shift focus from dense neural networks to sparse, dynamic systems, potentially reducing barriers to entry for smaller organisations with limited computational resources. Early edge computing and IoT adopters will deploy sparse neuromorphic systems for real-time processing applications.
Mid-term (2030s): The semiconductor industry could undergo a fundamental restructuring as sparse architectures enable dramatically more efficient AI hardware. This paradigm shift may democratise AI development, allowing smaller companies to compete with tech giants by leveraging resource-efficient sparse designs. Autonomous systems, robotics, and brain-computer interfaces will benefit from ultra-low-power sparse processors that can operate in resource-constrained environments.
Cultural Shifts: The tech industry's obsession with "scaling up" may give way to a new philosophy of "scaling smart," where efficiency and elegance are valued over raw computational power. Engineers and researchers may develop new aesthetic and design principles around minimalism and constraint-based optimisation. Public perception of AI may shift from viewing it as computationally intensive to naturally efficient when properly designed.
Broader Trend: This signal aligns with growing environmental concerns about AI's energy consumption and represents a movement toward sustainable computing. It foreshadows a future where biological principles of efficiency guide technological development, marking a shift from silicon-centric to bio-inspired design philosophies. The discovery may catalyse broader questions about optimisation in complex systems across multiple disciplines.
Estimated Timeline to Impact
Short (1-2 years): Technical readiness is exceptionally high, as sparse architectures can be rapidly prototyped using existing fabrication techniques. Early commercial applications in specialised domains like edge AI and sensor networks are likely within 18 months. The strong economic incentives around energy efficiency will accelerate adoption.
Signal Type
Technological: This represents a fundamental breakthrough in AI hardware design methodology with profound implications for computational theory and practice.
Confidence Level
High: The research provides concrete experimental evidence and measurable performance improvements. The underlying principles are well-grounded in established nano-electronics and can be validated through reproducible experiments. Multiple research groups are likely to replicate and extend these findings rapidly.
Signal Strength
Strong/Accelerating: This signal has moved beyond theoretical possibility to demonstrated capability. The clear performance advantages and economic incentives suggest rapid adoption and further development. The paradigm challenge is fundamental enough to reshape entire research and development trajectories.
Analyst Intuition
This feels like a watershed moment, the discovery that makes previous approaches seem flawed in retrospect. The elegance of achieving better results with less complexity has the hallmarks of a genuine paradigm shift. Our intuition suggests this will become one of those "obvious" innovations everyone claims to have seen coming, despite its radical departure from conventional wisdom. The signal has the potential to trigger a cascade of related discoveries as researchers explore the broader implications of constraint-based optimisation in complex systems.
2. Neuromorphic Mimicry Attacks (NMAs): The Security Paradox of Brain-Inspired Computing
Source: https://arxiv.org/pdf/2505.17094
Date: May 23, 2025
Signal
NMAs represent a new class of cyber threats that traditional cybersecurity frameworks cannot address.
Signal Description
The first systematic analysis of security vulnerabilities in neuromorphic computing reveals that neuromorphic mimicry attacks (NMAs) can exploit the probabilistic processing and non-deterministic behaviour that make brain-inspired systems efficient. The research demonstrates that the very features neuromorphic systems use to achieve biological-like efficiency (stochastic processing, temporal dynamics, and probabilistic outputs) create unprecedented attack vectors.
Most alarmingly, the study shows that security and neuromorphic efficiency may be fundamentally incompatible, suggesting that widespread adoption of brain-inspired computing may require new security paradigms. The attacks can remain undetected while subtly manipulating system behaviour, potentially compromising critical infrastructure, autonomous vehicles, and medical devices that rely on neuromorphic processors. This discovery overturns the assumption that neuromorphic systems inherit the security robustness of their digital predecessors, revealing a misalignment between bio-inspired efficiency and traditional threat models.
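To see why such attacks resist detection, consider a toy model (our illustration, not the paper's attack): a stochastic unit whose legitimate output is already noisy, and an attacker who biases its firing rate by less than the monitor's noise band per observation window.

```python
import numpy as np

rng = np.random.default_rng(42)

def stochastic_unit(rate, n_steps):
    """A toy stochastic neuron: a Bernoulli spike train at the given rate."""
    return rng.random(n_steps) < rate

def monitor_flags(spikes, expected_rate, n_sigma=3.0):
    """Naive anomaly monitor: flag if the observed rate leaves the n-sigma
    band expected from the unit's own stochasticity."""
    n = len(spikes)
    sigma = np.sqrt(expected_rate * (1 - expected_rate) / n)
    return abs(spikes.mean() - expected_rate) > n_sigma * sigma

BASE_RATE = 0.20
WINDOW = 200  # short monitoring windows have wide noise bands

# Benign behaviour: the monitor almost never fires.
benign = stochastic_unit(BASE_RATE, WINDOW)

# Mimicry attack: nudge the rate by under one noise-sigma per window,
# so each window looks like ordinary stochastic variation.
sigma_per_window = np.sqrt(BASE_RATE * (1 - BASE_RATE) / WINDOW)
attacked_rate = BASE_RATE + 0.8 * sigma_per_window
attacked = stochastic_unit(attacked_rate, WINDOW)

print("benign flagged:  ", monitor_flags(benign, BASE_RATE))
print("attacked flagged:", monitor_flags(attacked, BASE_RATE))

# Yet over many windows the hidden bias accumulates into a real
# behavioural shift in the system's long-run output.
long_run = stochastic_unit(attacked_rate, 200_000)
print(f"long-run rate drift: {long_run.mean() - BASE_RATE:+.4f}")
```

The point of the sketch is structural: when legitimate behaviour is defined by a probability distribution rather than a deterministic trace, an attacker who stays inside that distribution per observation is statistically camouflaged, which is why the paper argues conventional anomaly detection does not transfer.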
CIPHER Category
Primary – Contradictions: This signal exposes the fundamental contradiction between neuromorphic efficiency and system security. It reveals that the biological inspiration that makes these systems attractive also introduces vulnerabilities with no parallel in conventional computing. The research demonstrates that security and performance optimisation in neuromorphic systems may represent competing objectives.
Secondary – Extremes: The signal highlights an extreme position that neuromorphic security represents an entirely new domain requiring novel theoretical frameworks. It pushes the boundaries of cybersecurity by identifying threats that emerge from the intersection of biology and technology.
Implications
Short-term (2025–2028): Neuromorphic hardware deployment in critical systems will slow as security researchers scramble to develop appropriate countermeasures. A new field of neuromorphic cybersecurity will emerge, requiring specialised expertise in brain-inspired computing and security analysis. Insurance companies and regulatory bodies will demand new assessment frameworks for neuromorphic systems before approving their use in high-stakes applications.
Mid-term (2030s): The cybersecurity industry will split into conventional and neuromorphic security domains, requiring different expertise and tools. Neuromorphic systems may require built-in security monitoring that continuously adapts to new attack patterns, fundamentally changing how we design secure computing systems. The arms race between neuromorphic attackers and defenders could drive rapid innovation and create new forms of systemic risk.
Cultural Shifts: Public trust in AI systems may split between "secure" conventional systems and "vulnerable" neuromorphic systems. The security community will develop new professional specialisations and certification programs for neuromorphic security. Policymakers will face pressure to regulate neuromorphic systems differently from conventional computing.
Broader Trend: This signal represents the emergence of "biological cybersecurity": security challenges that arise from mimicking biological systems. As more technologies adopt biologically inspired architectures, it foreshadows a broader horizon of security challenges, inviting a rethinking of trust, resilience, and predictability in complex systems. The discovery may catalyse the development of "security by design" principles specifically for biologically inspired systems.
Estimated Timeline to Impact
Short (1-2 years): Attack feasibility has been demonstrated in simulation, and real-world implementations could follow rapidly. The urgency of protecting critical infrastructure will accelerate countermeasure development despite limited current technical readiness.
Signal Type
Technological: This represents a security breakthrough that reveals fundamental vulnerabilities in an emerging computing paradigm.
Confidence Level
Medium: While the theoretical framework is sound and initial demonstrations are convincing, the real-world impact remains uncertain. The threat's novelty makes it difficult to assess how quickly effective countermeasures can be developed.
Signal Strength
Medium/Building: This is an emerging signal with significant potential impact. Its trajectory will depend on how quickly the security community can develop effective responses and whether attackers can exploit these vulnerabilities at scale.
Analyst Intuition
This is a wake-up call for the neuromorphic computing community and should force a reconsideration of deployment strategies. The irony that biological inspiration creates security vulnerabilities suggests deeper tensions between natural and artificial systems. This will become a defining challenge for neuromorphic computing, determining whether these systems can transition from research curiosities to deployed infrastructure.
3. Non-Equilibrium Biological Computation: Computing in the Wild
Source: bioRxiv.org
Date: May 16, 2025
Signal
Research suggests that our entire conception of computation may be constrained by digital thinking that prioritises stability over the dynamic richness of living systems.
Signal Description
The research demonstrates that biological systems can perform meaningful computation before reaching steady-state equilibrium, fundamentally challenging the stability-seeking assumptions underlying both digital and analog computing. Using sequestration-based neural networks operating out of equilibrium (sequestration is a biochemical mechanism in which molecules bind reversibly, enabling logic-like operations through transient binding), researchers showed that transient, dynamic biological processes contain untapped computational resources. This discovery suggests that non-equilibrium dynamics, not equilibrium optimisation, may be the key to understanding biological intelligence.
The implications are staggering; this research opens the possibility of computation that embraces instability and dynamism as virtues rather than problems to be solved. Unlike conventional computing systems that seek stable, predictable states, these biological networks harness chaos and uncertainty to perform complex information processing.
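A minimal sketch of the sequestration motif helps make this concrete. Tight binding of two species computes a rectified subtraction, the biochemical analogue of a ReLU: free A approaches max(0, A_total − B_total). The toy integration below (our illustration with arbitrary rate constants, not the paper's out-of-equilibrium network) steps the reaction A + B → AB forward and reads out free A; the paper's further point is that even the transient trajectory toward this value, not just the endpoint, carries computation.

```python
def sequestration_relu(a_tot, b_tot, k_bind=100.0, dt=1e-4, steps=2000):
    """Integrate the binding reaction A + B -> AB with explicit Euler
    steps and return the remaining free concentration of A.
    With strong binding, free A approaches max(0, a_tot - b_tot):
    sequestration implements a ReLU-like rectified subtraction."""
    a, b = a_tot, b_tot
    for _ in range(steps):
        flux = k_bind * a * b * dt  # mass-action binding flux this step
        a -= flux
        b -= flux
    return a

for a_tot, b_tot in [(1.0, 0.3), (1.0, 1.0), (0.5, 1.2)]:
    approx = sequestration_relu(a_tot, b_tot)
    ideal = max(0.0, a_tot - b_tot)
    print(f"A={a_tot}, B={b_tot}: free A = {approx:.3f} (ideal ReLU: {ideal:.3f})")
```

With ReLU-like elements available chemically, networks of such reactions can in principle express the same function class as simple artificial neural networks, which is what makes the "computation before equilibrium" result more than a curiosity.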
CIPHER Category
Primary – Contradictions: This signal contradicts the equilibrium-centric foundation of computational theory by demonstrating value in transient, unstable biological dynamics. It reveals the tension between our stability-seeking engineering paradigms and the inherently dynamic nature of biological intelligence.
Secondary – Extremes: The signal represents an extreme departure from conventional computing by embracing instability and chaos as computational resources. It pushes the boundaries of what we consider "computation" by finding information processing in systems we would typically consider too unstable to be useful.
Implications
Short-term (2025–2028): Synthetic biology labs (e.g., the Wyss Institute and MIT Media Lab) and neuromorphic computing researchers will begin designing biological circuits that exploit non-equilibrium dynamics for computation. Early applications may emerge in biosensing and environmental monitoring, where biological systems naturally operate in non-equilibrium conditions. Computer scientists will develop new theoretical frameworks for understanding computation in dynamic, unstable systems.
Mid-term (2030s): Hybrid bio-digital systems could emerge that use non-equilibrium biological components for adaptive, context-sensitive computation. Industries dealing with complex, changing environments may adopt non-equilibrium computational approaches for real-time adaptation and learning. This paradigm could revolutionise our approach to artificial life and self-organising systems.
Cultural Shifts: The computing community may develop new appreciation for uncertainty and instability as creative forces rather than problems to be eliminated. Biological systems may be viewed as sophisticated computers rather than just inspiration for artificial systems. The boundary between living and computational systems may become increasingly blurred.
Broader Trend: This signal aligns with growing interest in dynamic, adaptive systems across multiple scientific disciplines. It represents a shift toward embracing complexity and emergence rather than seeking to control and predict. The discovery may catalyse new approaches to artificial intelligence that prioritise adaptability over optimisation.
Estimated Timeline to Impact
Long (5-10+ years) / Speculative: The experimental and theoretical foundations are at an early stage, requiring significant development before practical applications emerge. The paradigm shift required to embrace non-equilibrium computation represents a fundamental change in how we think about information processing.
Signal Type
Technological/Conceptual: This represents both a technical discovery and a fundamental conceptual shift in understanding computation and biological intelligence.
Confidence Level
Low: While the initial results are intriguing, the complexity of non-equilibrium biological systems makes it difficult to predict whether this approach can be reliably harnessed for practical computation. Significant technical and theoretical challenges remain.
Signal Strength
Weak/Emerging: This is a weak signal with potentially transformative implications. Its development trajectory is highly uncertain, but the conceptual breakthrough could reshape computational theory if it proves robust.
Analyst Intuition
This feels like a glimpse into an alien form of computation, one that operates by principles so foreign to our digital intuitions and entrenched mechanistic thinking (ontological shock) that it may take years to fully comprehend its implications. The idea of harnessing chaos for computation suggests we may be discovering fundamental principles of information processing in living systems. Our intuition is that this represents either a profound breakthrough or an intriguing dead end, the kind of discovery that could reshape our understanding of intelligence.
4. E.coli as Unmodified Computational Substrate: Life as Computer
Source: bioRxiv.org
Date: May 22, 2025
Signal
Groundbreaking research demonstrates that wild-type Escherichia coli bacteria can serve as physical reservoirs for computation—without any genetic modification.
Signal Description
Using a framework known as reservoir computing, the study shows that the dynamic growth behaviour of living E. coli populations can be repurposed to perform machine learning tasks such as regression and classification. This discovery challenges the fundamental assumption that biological computation requires engineered or synthetic organisms. The researchers demonstrated that the inherent dynamics of living bacterial populations possess sufficient computational complexity to support these tasks, including medically relevant examples such as classifying COVID-19 patient plasma samples by disease severity.
Rather than constructing synthetic biological circuits, the team leveraged the intrinsic behaviour of E. coli as a physical reservoir. This system successfully processed inputs (in the form of nutrient media) and produced useful outputs (growth profiles interpreted via machine learning). Remarkably, the bacterium could distinguish complex clinical categories based solely on how it grew in the presence of different biological samples.
This suggests that computation may be an emergent property of living systems, reframing how we think about intelligence, information processing, and the boundary between biological and artificial systems. Instead of engineering life to compute, we may begin to observe and extract the computation already present within life itself.
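The computational pattern is easiest to see in code. In reservoir computing only a simple linear readout is trained; the "reservoir" (here, the bacterial population responding to a sample) does the nonlinear work. The sketch below stands in simulated logistic growth curves for real optical-density measurements; the two sample classes, parameter shifts, and noise levels are all invented for illustration and do not come from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

t = np.linspace(0, 24, 48)  # 24 h of readings, one per 30 min

def growth_curve(carrying_capacity, rate):
    """Toy logistic growth profile (stand-in for an OD600 time series)."""
    return carrying_capacity / (1 + np.exp(-rate * (t - 12)))

def sample(cls):
    """Simulate one culture: the hypothetical sample class nudges the
    growth parameters, plus biological and measurement noise."""
    k = 1.0 + 0.2 * cls + rng.normal(0, 0.05)
    r = 0.5 + 0.1 * cls + rng.normal(0, 0.02)
    return growth_curve(k, r) + rng.normal(0, 0.01, len(t))

X = np.array([sample(cls) for cls in (0, 1) for _ in range(40)])
y = np.array([cls for cls in (0, 1) for _ in range(40)])

# The organism is the reservoir; we train only a linear (ridge) readout
# on the raw growth profiles.
idx = rng.permutation(len(y))
train, test = idx[:60], idx[60:]
A = np.hstack([X[train], np.ones((len(train), 1))])
w = np.linalg.solve(A.T @ A + 1e-3 * np.eye(A.shape[1]),
                    A.T @ (2 * y[train] - 1))
pred = (np.hstack([X[test], np.ones((len(test), 1))]) @ w > 0).astype(int)
acc = float((pred == y[test]).mean())
print(f"readout accuracy on held-out samples: {acc:.2f}")
```

The design choice worth noticing is that no model of E. coli physiology appears anywhere: the readout only needs the growth dynamics to differ reproducibly between input classes, which is why an unmodified organism suffices.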
CIPHER Category
Primary – Hacks: This signal represents a fundamental "hack" of natural biological processes, repurposing bacterial growth for computation in ways that evolution never intended. It demonstrates how unmodified biological systems can be co-opted for technological purposes.
Secondary – Contradictions: The work contradicts the assumption that biological computation requires genetic engineering or synthetic biology. It challenges the notion that useful computation requires designed, engineered systems.
Implications
The implications extend far beyond biotechnology. This work suggests that computation may be an emergent property of any sufficiently complex biological system. Rather than engineering biology to compute, we may be able to harness the natural computational processes already present in living systems. This represents a radical reframing of the relationship between life and computation, suggesting that the boundary between biological and artificial intelligence may be more porous than previously imagined.
Short-term (2025–2028): Biotechnology companies will explore using natural biological processes for specialised computing applications. Research labs with limited resources for genetic engineering will gain access to biological computation using readily available organisms. Early applications may emerge in environmental sensing and biomonitoring where living computers can operate in natural environments.
Mid-term (2030s): A new industry of "living computers" could emerge, using unmodified organisms for specialised computational tasks. This approach may enable distributed biological sensing networks that can evolve and adapt to changing environmental conditions. The paradigm could revolutionise our approach to sustainable computing by using self-replicating biological systems instead of manufactured hardware.
Cultural Shifts: The distinction between natural and artificial systems may become increasingly meaningless as we discover computation everywhere in the biological world. Society may develop new ethical frameworks for using living organisms as computational resources. The role of genetic engineering in biotechnology may shift from creating new capabilities to optimising natural ones.
Broader Trend: This signal aligns with growing interest in sustainable, self-replicating technologies that minimise environmental impact. It represents a shift toward working with natural systems rather than engineering around them. The discovery may catalyse new approaches to distributed computing using biological networks.
Estimated Timeline to Impact
Medium (3-5 years): While proof-of-concept exists, scaling, standardisation, and integration into practical workflows will require significant development. The biological nature of the systems means deployment must account for living system requirements.
Signal Type
Technological/Biological: This represents a hybrid breakthrough that blurs the boundaries between technology and biology.
Confidence Level
Medium: The experimental results are convincing, but questions remain about scalability, reliability, and practical deployment of living computational systems. The approach's limitations and failure modes are not yet fully understood.
Signal Strength
Medium/Building: This signal has moved beyond theoretical possibility to demonstrated capability. Its trajectory will depend on overcoming practical challenges of working with living systems.
Analyst Intuition
This discovery has a science-fictional quality, the idea that ordinary bacteria are secretly computers waiting to be discovered. It feels like we're uncovering a hidden layer of reality where computation is ubiquitous in the natural world. Our intuition is that this represents a paradigm shift toward viewing life itself as a computational medium, potentially leading to new forms of biological technology that we can barely imagine.
5. Embodied Perspective Taking: Synthetic Worlds Reveal Limits of Language-Only AI
Source: https://arxiv.org/pdf/2505.14366
Date: May 20, 2025
Signal
A paradigm-shifting framework demonstrates that spatial reasoning and Visual Perspective Taking (VPT), the ability to infer and understand others' viewpoints, cannot emerge from language data alone but requires embodied interaction with simulated or physical environments.
Signal Description
This work leverages synthetic datasets (artificially constructed 3D environments) to train and evaluate AI models in Visual Perspective Taking (VPT), a cognitive skill essential for understanding spatial relationships and the viewpoints of other agents.
Crucially, the research demonstrates that spatial understanding cannot be acquired from language data alone. Instead, it requires direct, embodied interaction with a (real or simulated) world, challenging the prevailing assumption that large language models (LLMs) can internalise spatial cognition purely from text. This signal reframes the debate on AI generalisation, suggesting that grounded, physically situated experience is an indispensable ingredient for advanced machine intelligence.
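The geometric core of VPT, re-expressing the world from another agent's viewpoint, is simple to state in code; what the paper argues is that models do not reliably acquire this transform from text alone. A minimal 2-D sketch (our illustration, not the paper's framework):

```python
import math

def to_agent_frame(point, agent_pos, agent_heading):
    """Express a world-frame 2-D point in another agent's egocentric frame.
    agent_heading is the agent's facing direction in world-frame radians."""
    dx = point[0] - agent_pos[0]
    dy = point[1] - agent_pos[1]
    # Rotate by -heading so the agent's facing direction becomes +x.
    c, s = math.cos(-agent_heading), math.sin(-agent_heading)
    return (c * dx - s * dy, s * dx + c * dy)

def visible_to(point, agent_pos, agent_heading, fov=math.pi / 2):
    """A simple VPT query: is the point inside the agent's field of view?"""
    x, y = to_agent_frame(point, agent_pos, agent_heading)
    return x > 0 and abs(math.atan2(y, x)) <= fov / 2

# An object at (1, 2); an agent at the origin facing +y (heading pi/2).
print(to_agent_frame((1, 2), (0, 0), math.pi / 2))  # approximately (2.0, -1.0)
print(visible_to((1, 2), (0, 0), math.pi / 2))
```

Answering "can the other agent see the object?" requires exactly this change of frame; the paper's synthetic 3D environments supply the grounded experience from which such transforms can be learned rather than hand-coded.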
CIPHER Category
Primary – Contradictions: This signal directly contradicts the widespread belief that spatial reasoning can be learned from language alone. It exposes a fundamental tension in AI research: the assumption that scaling up language models will yield general intelligence versus the evidence that certain cognitive abilities, such as perspective taking, require embodied or simulated experience.
Secondary – Extremes: The work takes an extreme (but increasingly credible) position that embodiment is not just beneficial but foundational for spatial intelligence. It pushes the boundaries of AI research by insisting that certain forms of reasoning are impossible to achieve without grounding in a physical or simulated world.
Implications
Short-term (2025–2028): AI and robotics labs will begin to adopt synthetic world datasets for training agents in spatial tasks such as navigation, manipulation, and multi-agent interaction. Human-robot interaction (HRI) researchers will use these frameworks to develop and benchmark social reasoning capabilities in machines. Early applications may emerge in virtual assistants, gaming AI, and simulation-based education, where perspective taking is valuable.
Mid-term (2030s): The field may experience a paradigm shift as embodied, spatially grounded AI systems demonstrate capabilities that language-only models cannot match, such as robust navigation, theory of mind, and context-aware interaction. Research into machine empathy, social intelligence, and collaborative robotics will accelerate, leveraging VPT as a core competency. Adjacent fields, including cognitive science and developmental psychology, may benefit from insights into how perspective taking emerges in artificial and biological systems.
Second-Order Effects: The dominance of language-centric AI may be destabilised, with funding and attention shifting toward embodied intelligence. New interdisciplinary collaborations may form between AI, neuroscience, robotics, and the social sciences to explore the foundations of perspective taking and spatial cognition. The public's understanding of AI may evolve, recognising that intelligence is not just about processing language but about being in the world.
Cultural Shifts: As AI systems begin to exhibit situated reasoning through embodied simulations, public narratives may shift away from disembodied "superintelligence" tropes toward more ecologically grounded models of artificial cognition, influencing education and policy. Ethical debates may arise about the treatment and rights of embodied AI agents, especially as they acquire more human-like social and spatial reasoning skills.
Broader Trend: This signal fits within a larger movement toward embodied AI, where intelligence is seen as inseparable from the agent’s physical or simulated context. It echoes trends in cognitive science that emphasise the role of sensorimotor experience in the development of mind.
Estimated Timeline to Impact
Long (5–10+ years): While the conceptual framework and synthetic datasets are emerging now, the technical, social, and governance infrastructure for fully embodied, spatially-grounded AI is still nascent. Widespread adoption and integration into mainstream AI practice will require advances in simulation technology, robotics, and cross-disciplinary collaboration.
Signal Type
Technological/Conceptual: This represents both a technical advance (in synthetic world training and evaluation) and a conceptual challenge to language-centric AI paradigms.
Confidence Level
Medium: The need for embodiment in spatial cognition is supported by cognitive science and initial experimental results, but the generalisability and scalability of synthetic world training remain to be proven at scale. While early findings are promising, future robustness will depend on reproducibility and adoption of shared benchmarks in embodied VPT tasks.
Signal Strength
Weak/Emerging: This is an early-stage signal. Its influence is growing but not yet central. Its future strength will depend on the success of embodied AI systems in surpassing language-only models in real-world tasks.
Analyst Intuition
This signal is small but potentially foundational for the next era of AI. The metaphor “giving AI a body, not just a voice” captures its essence: true intelligence may require a situated perspective, not just abstract reasoning. Beneath the surface, this signal hints at the emergence of machine empathy and richer forms of social intelligence, with implications reaching far beyond robotics into the fabric of how AI understands and interacts with the world.