This report highlights another 5 of the most intriguing scan hits from our horizon scanning. All point to emerging AI possibilities, moving beyond today’s LLM-centric hype. Each signal is evaluated through the CIPHER lens, analysing implications, timelines, and confidence. All signals surfaced or gained fresh attention in April and May 2025.
1. AI Co-Author Aims to Revolutionise Pure Mathematics Research
Source - The Register
April 27, 2025
Signal Description
DARPA’s Exponentiating Mathematics (expMath) initiative seeks to develop an AI collaborator for pure mathematics, capable of proposing abstractions and proving theorems alongside human researchers.
Unlike automated theorem provers, expMath emphasises a creative partnership: AI would assist in brainstorming conjectures, translating intuition into formal proofs, and identifying patterns across mathematical domains.
The program explicitly frames AI as a co-author, requiring mathematicians to guide problem selection and interpret results. This challenges the assumption that abstract innovation is uniquely human, potentially accelerating a field historically limited by manual effort (<1% annual growth in new theorems).
CIPHER Category
Primary – Contradictions: Blurs the human/machine divide in theoretical creativity, decoupling mathematical progress from sole reliance on human intuition.
Secondary – Inflections: DARPA’s three-year timeline (2025–2028) and $45M budget signal urgency, potentially catalysing a cultural shift in academia toward AI collaboration.
Implications
Short-term (2025–2028): Tools like the Lean proof assistant may integrate AI-generated lemmas (a lemma is a proven intermediate statement used as a stepping stone to prove larger theorems), easing verification. Mathematicians like Terence Tao foresee AI streamlining formalisation but retaining human oversight.
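To make the lemma-as-stepping-stone idea concrete, here is a minimal Lean 4 sketch; the statements and names are illustrative only and are not expMath output.

```lean
-- Illustrative Lean 4 sketch (not expMath output): a small lemma is proved
-- once, then reused as a stepping stone inside a slightly larger theorem.

-- The intermediate lemma: adding zero on the left leaves n unchanged.
theorem zero_add_self (n : Nat) : 0 + n = n := Nat.zero_add n

-- The larger statement reuses the lemma inside its proof.
theorem zero_add_eq_add_zero (n : Nat) : 0 + n = n + 0 := by
  rw [zero_add_self, Nat.add_zero]
```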
Mid-term (2030s): If successful, expMath could spill into theoretical physics, cryptography, and topology, accelerating fields bottlenecked by manual proof work.
Cultural Shifts: Debates over authorship credit and “understanding” (e.g., accepting AI proofs without human intuition) may reshape academic incentives.
Broader Trend: Part of a growing “AI-for-science” wave (e.g., AlphaFold, AI-driven quantum chemistry) targeting foundational disciplines.
Estimated Timeline to Impact
2025–2028: DARPA-funded prototypes focus on specific domains (e.g., algebraic topology).
Post-2028: Adoption hinges on overcoming AI’s current abstraction limits. Partial success (AI as a “proof assistant++”) is more likely than full autonomy by 2030.
Signal Type
Technological/Institutional: A targeted government effort intersecting AI and theoretical science.
Confidence Level
Moderate-High: DARPA’s focus on human-AI collaboration (not replacement) aligns with existing tools like Lean, reducing technical risk. However, AI’s ability to mimic human abstraction remains unproven.
Signal Strength
High in Pure Maths, Medium in broader science: While unique in targeting abstract reasoning, it aligns with trends like AI-generated hypotheses in physics.
Analyst Intuition
This initiative is an early sign of AI moving up the value chain of intellectual tasks, a bellwether for AI’s encroachment into high-abstraction domains. Even modest progress could normalise AI as a research partner, subtly shifting how breakthroughs are achieved. The key tension lies in balancing speed (AI-generated proofs) with depth (human understanding). After an initial phase of scepticism and adjustment in the research community, AI may ultimately force a new epistemology of discovery.
2. BCI on a Budget: Webcam-Based Brain-Computer Interface Hack
Source - arxiv.org
April 21, 2025
Signal Description
A researcher has introduced “NeuGaze,” a novel system that reimagines brain-computer interfaces (BCI) by replacing EEG headsets or implants with a standard laptop webcam, powered by artificial intelligence.
Instead of directly capturing neural signals, NeuGaze uses AI-driven computer vision models to analyse subtle neck-up signals, such as eye gaze, head movements, and facial expressions, and decode user intent for real-time computer control.
The system achieves performance comparable to conventional input devices, supporting tasks such as cursor navigation, key triggering via a skill wheel, and dynamic gaming.
NeuGaze requires minimal calibration and no specialised hardware, relying on AI software and a standard 30 Hz camera. The prototype, tested by the researcher, enabled complete hands-free game control using only head, face, and eye movements. By leveraging AI to extract intent from visual cues, NeuGaze blurs the line between assistive technology and BCI, challenging the notion that brain-computer interaction must involve direct neural signal reading.
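NeuGaze’s own implementation is not reproduced here; as a rough illustration of the webcam-as-pointer idea, the Python sketch below maps a single face landmark to the cursor. It assumes the opencv-python, mediapipe, and pyautogui packages are installed; the nose-tip landmark and the direct linear mapping to screen coordinates are illustrative choices, not the paper’s method.

```python
# Illustrative sketch of the webcam-as-pointer idea (not NeuGaze's code).
# Assumes opencv-python, mediapipe, and pyautogui; the nose-tip landmark and
# the direct linear mapping to screen coordinates are illustrative choices.
import cv2
import mediapipe as mp
import pyautogui

SCREEN_W, SCREEN_H = pyautogui.size()
face_mesh = mp.solutions.face_mesh.FaceMesh(max_num_faces=1, refine_landmarks=True)
cap = cv2.VideoCapture(0)  # a standard ~30 Hz laptop webcam

try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_face_landmarks:
            # Landmark 1 is (approximately) the nose tip in MediaPipe's face mesh;
            # its normalised (x, y) position is mapped linearly onto the screen.
            nose = results.multi_face_landmarks[0].landmark[1]
            pyautogui.moveTo(nose.x * SCREEN_W, nose.y * SCREEN_H)
        cv2.imshow("webcam pointer (illustrative)", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
            break
finally:
    cap.release()
    face_mesh.close()
    cv2.destroyAllWindows()
```

A real system would decode richer intent (expressions, gaze dwell, a skill wheel for key triggers) rather than raw head position, but the same pipeline shape applies: camera frames in, AI-derived features out, mapped to standard input events.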
CIPHER Category
Primary – Hacks: NeuGaze is a classic hack that repurposes existing webcam and AI technology for new use, substituting conventional neural inputs with visual proxies. By using a webcam to emulate a BCI, it circumvents the cost and complexity of a traditional BCI.
Secondary – Contradictions: It creates a contradiction by decoupling the brain-computer interface from direct neural signal acquisition. Where BCI typically means EEG or implants, NeuGaze inverts this expectation; a non-neural interface achieves similar outcomes. This unusual pairing of facial UX and neural BCI domains challenges the assumption that sophisticated sensors are required for effective BCI control.
Implications
Short-term (2025–2028): NeuGaze and similar AI-powered systems could rapidly expand access to assistive technology, allowing users, especially those with motor impairments, to control computers using webcams and AI-driven intent recognition. Open-source projects may foster DIY adoption, while startups integrate AI-based computer vision into affordable accessibility tools, bypassing the need for traditional neural hardware.
Mid-term (2030s): As AI models for facial expression and gaze detection advance, hybrid interfaces may combine AI-driven visual proxies with other biosignals, refining intent detection for broader applications in gaming, AR/VR, and industrial hands-free control. The growing sophistication of AI could further blur the distinction between classic BCIs and AI-mediated intent interfaces.
Cultural Shifts: The rise of AI-powered BCI-lite solutions may spark debate within neurotech and accessibility communities about what constitutes an actual BCI. Funding and attention could shift from invasive, neural-signal-based BCIs toward pragmatic, AI-driven solutions that prioritise accessibility and ease of use.
Broader Trend: NeuGaze exemplifies a wider movement in human-computer interaction in which AI leverages ubiquitous hardware (like webcams) to unlock new forms of control and communication. It mirrors trends in AI-powered voice assistants and consumer-grade eye tracking, and signals a democratisation of advanced interface technology driven by advances in artificial intelligence.
Estimated Timeline to Impact
2025–2028: NeuGaze’s open-source prototype sparks niche adoption in assistive technology and gaming communities. AI-driven computer vision models could refine gaze and facial expression tracking. Startups integrate similar systems into low-cost accessibility tools, targeting motor-impaired users and hands-free gaming peripherals.
Post-2028: Scalability depends on overcoming environmental limitations (e.g., lighting, camera quality) and integrating with mainstream AI frameworks (e.g., Meta’s Llama-driven HCI tools). Hybrid interfaces may blend NeuGaze’s visual proxies with wearables or muscle sensors, competing with invasive BCIs. Full mainstream adoption hinges on AI’s ability to generalise intent decoding across diverse users and contexts, with BCI-lite solutions likely complementing (rather than replacing) neural implants by 2030.
Signal Type
Technological/Social: A grassroots innovation in human-computer interaction. It is a weak signal of a fringe technological hack with social implications (broader accessibility and user empowerment) rather than a corporate or mainstream release.
Confidence Level
Moderate: The technical feasibility is demonstrated (with performance comparable to conventional inputs in a single-user test), and it addresses a real need, so it is likely to be adopted in niche contexts. However, its impact depends on community uptake, further refinement, and validation with diverse users.
Signal Strength
Low-to-Medium: This single prototype from one research group (a lone outlier) makes it a weak signal. However, it aligns with a broader pattern of creative accessibility tech and the ubiquity of cameras, giving it some underlying strength if others replicate the idea.
Analyst Intuition
This signal suggests that the path to brain-machine synergy might not be linear. We may not need perfect neural implants to achieve useful mind-machine fusion. Such low-tech BCIs could gain a cult following and spur a new category of interfaces. It is a reminder that “good enough” hacks can sometimes outpace expensive high-tech solutions in the near term, a factor futurists should watch when considering how disruptive tech might first manifest.
3. Turning Hardware Flaws into Features: Memristor “Synapses” Boost AI
Source - arXiv preprint
April 21, 2025
Signal Description
Engineers have demonstrated a novel approach to AI hardware using memristors – electrical components that behave like brain synapses by remembering resistance levels (even with power off) and performing in-memory computation. In a new study, a team tackled the notorious variability in memristor behaviour, a factor long seen as a drawback (since manufacturing differences and noise can cause inconsistent outputs). Instead of fighting these “flaws,” the researchers devised layer ensemble averaging to leverage the variability to improve neural network robustness.
In essence, by averaging across layers of slightly different memristor-based neurons, the system turns randomness into a form of regularisation or fault tolerance, improving AI performance rather than degrading it. This is a significant weak signal in the domain of biologically grounded architectures: it mimics the redundancy and plasticity of biological brains, where inconsistency at the micro-level can lead to stability at the macro-level.
The development is surprising because it challenges the engineering dogma that hardware must be as uniform and error-free as possible; here, “defects” become a resource. It’s a rare case of turning a technological limitation into an advantage, hinting at a conceptual reversal in design philosophy for AI chips.
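As a rough numerical intuition for why averaging across variable copies of a layer helps, here is a toy NumPy sketch; it is my own illustration of the averaging principle, not the paper’s layer ensemble averaging method.

```python
# Toy NumPy illustration (my own, not the paper's method): averaging the
# outputs of several device-variable copies of the same layer suppresses the
# effect of per-device conductance noise, the intuition behind ensemble averaging.
import numpy as np

rng = np.random.default_rng(0)
ideal_w = rng.normal(size=(64, 32))   # the target weights of one layer
x = rng.normal(size=(1, 64))          # a single input activation vector

def noisy_copy(w, sigma=0.15):
    """One memristor-array realisation: weights perturbed by device variability."""
    return w * (1.0 + rng.normal(scale=sigma, size=w.shape))

ideal_out = x @ ideal_w
single_out = x @ noisy_copy(ideal_w)                                         # one noisy layer
ensemble_out = np.mean([x @ noisy_copy(ideal_w) for _ in range(8)], axis=0)  # 8 averaged copies

print("error, single noisy layer:", np.linalg.norm(single_out - ideal_out))
print("error, 8-copy ensemble   :", np.linalg.norm(ensemble_out - ideal_out))
```

Running the sketch shows the ensemble’s output error falling well below the single noisy layer’s, the same statistical effect the researchers exploit at the hardware level.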
CIPHER Category
Primary – Hacks: The researchers employed a hack-minded approach, repurposing what was seen as a hardware bug (stochastic device defects) into a feature for AI optimisation. This is an inventive hack at the intersection of hardware and algorithm, cleverly using an unintended property of memristors.
Secondary – Extremes: This work pushes the boundaries of neuromorphic computing by pursuing a novel goal, harnessing noise, an extreme departure from traditional digital precision. It explores a different route for AI hardware (analogue, stochastic computing) at the edge of mainstream practice. (Justification: Embracing variability to such an extent is highly unusual and pushes toward an extreme paradigm where perfect reproducibility isn’t required for reliable computing, much like brain tissue.)
Implications
Short-term (2025–2028): Memristor-based neuromorphic chips, long dismissed for their stochastic behaviour, find a new role—specifically in energy-constrained, edge-based AI systems. Prototypes might appear in research labs and specialised devices, capable of learning and adapting on the fly, embedded within sensors, wearables, or drones. These systems operate without the rigid reproducibility demanded by traditional chips. In this emerging niche, tolerance for imprecision becomes a source of robustness, allowing models to run in chaotic, real-world conditions that would otherwise crash precision-reliant architectures.
Mid-term (2030s): These brain-like chips start to proliferate into commercial and strategic domains – not because they are faster or more powerful in raw terms, but because they fail better. In radiation-rich environments, such as space or military theatres, or in low-power consumer hardware where cloud connectivity isn’t guaranteed, memristor systems offer an unusual proposition: adaptive learning that doesn’t demand perfection. Devices become more autonomous, context-aware, and more robust to failure – qualities increasingly prized as AI shifts to the edge. Yet, this shift doesn’t come without friction. Traditional software stacks, built on the assumption of deterministic computation, begin to creak. Debugging becomes more probabilistic. Toolchains fragment. The old paradigm of predictability is challenged by a new norm: “systems that thrive in noise.”
Cultural Shifts: The metaphor of the computer has been mechanistic, digital, discrete, and controlled. This signal hints at its erosion. In its place, a more organic metaphor takes root: the brain as a benchmark, not just for software models but hardware behaviour. Engineers, once trained to eliminate irregularity, begin to explore how irregularity might serve them. A philosophical pivot emerges: an acceptance that intelligence—machine or otherwise—might require a degree of ambiguity, imprecision, or “messiness.” That challenges deep assumptions in engineering: that more precision always yields better outcomes; that control must be absolute; that the system must always behave the same way twice.
Broader Trend: At a societal level, it could also reshape AI safety and verification debates. What does it mean to trust a system that doesn’t behave identically every time? And how do we govern intelligence that emerges not from code alone, but from the substrate it runs on?
Estimated Timeline to Impact
2025–2028: Memristor-based AI chips are still experimental, but progress is steady. Incorporating this variability-averaging method might yield prototype neuromorphic accelerators in the 2–4 year range for research applications.
Post-2028: Commercial impact (e.g. AI edge devices or data centre accelerators using memristors) could follow in the late 2020s if reliability and scaling issues are resolved. Full adoption, where this approach is mainstream, could be 5 to 10+ years out, aligning with Horizon 3 futures where classical transistor scaling has plateaued and alternatives are necessary.
Signal Type
Technological: A signal in advanced hardware R&D. It’s primarily a research/innovation signal indicating a possible future path for AI computation (crossing hardware and algorithm domains).
Confidence Level
Moderate: The study comes from credible academic sources and reflects a growing interest in neuromorphic computing. Confidence is bolstered by multiple concurrent efforts to use memristors and other analogue devices for AI. However, realising this in deployed systems requires overcoming significant engineering hurdles (scalability, consistency of “controlled chaos”), so there’s uncertainty about how far this approach will go.
Signal Strength
Medium: There is a small but notable community pursuing memristor/analogue AI, and this breakthrough aligns with their needs, giving the signal some momentum beyond a one-off idea. Still, compared to the dominant digital AI hardware paradigm, it’s a niche (for now). The fact that Nature Electronics and other high-profile outlets publish in this area suggests increasing validation of the concept.
Analyst Intuition
This signal resonates with an intuition that future AI might require breaking with “neat and tidy” computing traditions. Just as evolution found ways to make unreliable components (neurons) form reliable minds, our machines might need a similar embrace of imperfection. Our intuition is that this is a glimpse of a post-Moore’s Law future where leveraging new physics (analogue memory, probabilistic bits) is not just a gimmick but a necessity for continued AI progress. It could quietly upend assumptions in computer engineering, and those organisations that learn and adapt could leap ahead in the AI race.
4. Engineered Illusions: Fooling AI Agents with Fake Realities
Source - arXiv preprint (L. Medici, S. LaValle et al.)
April 25, 2025
Signal Description
A recent robotics paper introduces an almost science-fiction concept: controlling one AI agent by feeding it an illusion of reality. In this setup, a “receiver” agent (e.g., a robot) navigates using what it believes are signals from the environment (like radio beacons). A second agent, the “producer,” dynamically alters those signals to mislead the receiver’s self-localisation.
The result: the receiver thinks it has travelled to its goal, while the producer has covertly guided it somewhere else entirely. The robot is fully controlled by the producer’s signal illusions, yet remains confident that it has achieved its objective.
This is a weak signal of a new kind of adversarial, agent-on-agent interaction that goes beyond typical spoofing or hacking. It formalises “navigation illusions” as a controllable phenomenon, complete with mathematical models to ensure the receiver doesn’t notice the deception.
This development is novel and challenging in multiple ways: It underscores a hidden contradiction in AI autonomy (an agent can be 100% confident and 100% wrong simultaneously) and highlights an unexpected vulnerability – even perfectly functioning AI agents can be led astray by manipulating their perceived reality. It is both an extreme edge case and a significant conceptual shift, suggesting that mastering perception hacking could be as crucial as traditional control in future complex environments.
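As a concrete intuition for how a producer can steer a trusting receiver, here is a toy one-dimensional Python sketch; it is my own simplification and does not reproduce the paper’s optimal-control formulation.

```python
# Toy one-dimensional sketch (my own simplification, not the paper's
# formulation): a producer fabricates the receiver's position estimate so that
# the receiver's own goal-seeking controller drives it to a covert target,
# while the receiver ends up believing it has reached its goal.

def clip(u, lim=1.0):
    return max(-lim, min(lim, u))

goal, covert_target = 10.0, 3.0        # receiver's goal vs the producer's hidden goal
true_pos, believed_pos = 0.0, 0.0

for _ in range(20):
    # Producer: pick the action it wants, then fabricate the estimate that elicits it.
    desired_u = clip(covert_target - true_pos)
    believed_pos = goal - desired_u            # spoofed localisation fed to the receiver

    # Receiver: a naive controller that trusts its (spoofed) position estimate.
    u = clip(goal - believed_pos)
    true_pos += u                              # actual physical motion

print(f"receiver believes it is at {believed_pos:.1f} (its goal is {goal})")
print(f"receiver is actually at    {true_pos:.1f} (the covert target is {covert_target})")
```

The receiver’s internal goal loop completes successfully (believed position equals the goal) while its true position sits at the producer’s covert target, the confident-but-wrong decoupling described above.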
CIPHER Category
Primary – Contradictions: The core of this signal is a contradiction. The receiver agent’s perceived state and the real state are decoupled. Typically, an AI’s actions and outcomes are coupled (moving to X results in being at X), but here, that natural coupling is broken by design. This event/action contradicts previous behaviour patterns; the agent succeeds in its internal goal loop while failing in reality, a glaring contradiction regarding autonomy.
Secondary – Hacks: The producer’s strategy is effectively a hack on the agent’s sensors, repurposing the environment signals in a way they were never intended to be used (to misguide rather than guide). It exploits the receiver’s assumptions to “reprogram” its behaviour externally, a clever hack demonstrating how one system can creatively subvert another’s inputs.
Implications
Short-term (2025–2028):
Security & Safety:
Autonomous systems (drones, warehouse robots) face novel attack vectors where adversaries manipulate environmental inputs (e.g., spoofed GPS, fake RFID signals) to induce "self-delusion" in AI.
Emergence of AI hallucination tests to detect perceptual inconsistencies, akin to cybersecurity penetration testing.
Research Priorities:
Cross-verification protocols for sensor data (e.g., fusing lidar with blockchain-verified geospatial data); a minimal cross-check sketch follows this list.
Controlled illusion training: Using engineered deception in simulations to stress-test robot decision-making, similar to flight simulator failures.
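The following Python sketch, assumed rather than taken from the paper, illustrates the simplest form of such a cross-verification protocol: fuse two independent position estimates by inverse-variance weighting and flag a suspiciously large disagreement.

```python
# Illustrative cross-verification sketch (my own, not from the paper): compare a
# beacon-derived position against an independent dead-reckoning estimate; a
# large discrepancy flags possible spoofing before the fused value is trusted.
import numpy as np

def cross_check(beacon_pos, odometry_pos, beacon_sigma=0.5, odo_sigma=0.8, k=3.0):
    """Return (fused_estimate, suspicious) using inverse-variance fusion plus a gate."""
    diff = abs(beacon_pos - odometry_pos)
    gate = k * np.hypot(beacon_sigma, odo_sigma)      # ~3-sigma consistency gate
    w_b, w_o = 1 / beacon_sigma**2, 1 / odo_sigma**2
    fused = (w_b * beacon_pos + w_o * odometry_pos) / (w_b + w_o)
    return fused, diff > gate

print(cross_check(beacon_pos=12.0, odometry_pos=11.6))  # consistent readings -> (~11.9, False)
print(cross_check(beacon_pos=19.0, odometry_pos=11.6))  # spoofed beacon     -> (fused, True)
```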
Mid-term (2030s):
Regulatory & Military Shifts:
Mandatory perception-integrity safeguards for autonomous vehicles and industrial robots (EU/US/China).
Military R&D into illusion-based warfare: Hijacking enemy drones via deceptive signals vs. countermeasures like "reality anchors" (e.g., quantum-timestamped GPS).
AI Cognition Advances:
Meta-reasoning frameworks: Agents intermittently question input validity, using probabilistic models to weigh conflicting data.
Post-symbolic reasoning: AI moves beyond rigid data-driven logic to handle perceptual uncertainty (e.g., "What if my sensors are lying?").
Cultural Shifts:
From Trust to Skepticism:
Industry abandons the assumption that "well-trained AI + accurate sensors = reliable outcomes."
Public debates on benign deception ethics: Is it acceptable to trick rescue robots into avoiding danger zones?
Redefining Autonomy:
"True" autonomy is redefined to include self-doubt and external reality checks, mirroring human critical thinking.
Broader Trend:
Adversarial Resilience:
Rise of perception hacking as a discipline, paralleling the evolution of cybersecurity. Systems are designed assuming environments will be hostile.
Convergence with deepfake detection, blockchain oracle security, and other "reality assurance" fields.
Mental Models for AI:
Agents integrate internal representations of uncertainty and adversarial observers ("What if a producer is manipulating me?"), paving the way for AI theory of mind.
Estimated Timeline to Impact
2025–2028: Expect academic exploration in simulation and maybe simple real-world tests (e.g., spoofing a warehouse robot with fake GPS/Bluetooth signals). Awareness of this vulnerability could influence standards for autonomous vehicles and drones within 5 years (adding authentication to beacons or cross-checks).
Post-2028: As a tool, engineered illusions might become a standard part of testing autonomous AI in the late 2020s. For truly autonomous agents to reliably detect and handle such illusions (a Horizon 3 maturity), it may take a decade of research in agent cognition and robust perception.
Signal Type
Technological/Conceptual: A research signal highlighting a conceptual risk (and tool) in AI systems. It’s not a market or consumer trend, but a weak signal in the research community that could foreshadow future cybersecurity and AI ethics issues.
Confidence Level
Low to Moderate: The concept is clearly demonstrated in theory, and it aligns with known issues of sensor spoofing (e.g., GPS spoofing is already real). However, its significance as a paradigm shift is speculative; it is uncertain how often this scenario will matter in practice. Confidence is higher that some form of perception attack will be relevant (that’s already known in adversarial ML), but lower that this precise “illusion optimal control” formulation will see widespread application.
Signal Strength
Low: This is an isolated theoretical signal at present. Few are discussing “navigation illusions” specifically, making it a faint but intriguing weak signal. It does, however, intersect with growing concerns over AI safety and adversarial environments, which gives it some supportive context. As autonomous systems proliferate, related incidents (even if not as elaborately engineered as this concept) may occur, which would quickly strengthen this signal’s importance.
Analyst Intuition
There’s an intuitive unease this signal evokes: a sense that the reality we present to AIs can become a tool for control. Our intuition is that as AI agents become more prevalent, psychological manipulation of AIs (for lack of a better term) will become a field of study: essentially tricking AI “minds” analogous to how magicians trick humans. It underscores that advanced AI may need a form of common sense or suspicion about its inputs. In a way, this weak signal might be hinting that true autonomy will require not just intelligence, but a form of wisdom to know when things aren’t what they seem.
5. Blueprint for a Brain-like AI: Neuroscience-Inspired AGI Architecture
Source - arXiv preprint (R. Gupta et al.)
April 30, 2025
Signal Description
A team of independent researchers released a comprehensive “Personalised AGI” architecture proposal, drawing directly from neuroscience principles. The paper argues that scaling deep learning alone won’t achieve human-like AGI, advocating instead for architectures integrating:
Hebbian learning - associative “fire together, wire together” dynamics
Synaptic pruning - dynamic resource optimisation for edge devices
Dual-memory systems - hippocampal fast learning + cortical slow consolidation
Sparse coding - energy-efficient distributed representations.
The design enables lifelong on-device learning without catastrophic forgetting, targeting personal robots or smartphones. This challenges the cloud-centric “bigger is better” paradigm, framing AGI as an architectural/systems problem rather than purely a scaling challenge.
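To ground two of the listed ingredients, here is a minimal NumPy sketch of a Hebbian weight update followed by magnitude-based synaptic pruning; it illustrates the general mechanisms, not the paper’s architecture.

```python
# Minimal NumPy sketch (my illustration, not the paper's architecture) of two
# listed ingredients: a Hebbian update ("fire together, wire together")
# followed by magnitude-based synaptic pruning to free resources.
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(scale=0.1, size=(8, 8))  # synapses between 8 pre- and 8 post-neurons

def hebbian_step(w, pre, post, lr=0.05, decay=0.001):
    """Strengthen weights whose pre- and post-synaptic units are co-active."""
    return (1 - decay) * w + lr * np.outer(pre, post)

def prune(w, keep_fraction=0.5):
    """Synaptic pruning: zero out the weakest synapses."""
    threshold = np.quantile(np.abs(w), 1 - keep_fraction)
    return np.where(np.abs(w) >= threshold, w, 0.0)

for _ in range(100):
    pre = (rng.random(8) < 0.3).astype(float)   # sparse random activity
    post = (rng.random(8) < 0.3).astype(float)
    w = hebbian_step(w, pre, post)

w = prune(w)
print(f"{np.count_nonzero(w)} of {w.size} synapses kept after pruning")
```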
CIPHER Category
Primary – Practices: This proposal, if it gains traction, could spur new practices in AI development, moving away from the dominant paradigm. It questions commonly held beliefs (“scaling is all we need”) and introduces a co-evolution of ideas from neuroscience into AI practice. It threatens the existing mindset and incumbent cloud-AI business models by suggesting many current AI achievements are hitting a wall without these changes.
Secondary – Rarities: As a theoretical framework combining diverse principles, it’s an outlier concept. There are few examples of end-to-end AGI blueprints that integrate so many bio-inspired elements. It seems out of place amid the incremental improvements of mainstream AI, marking it as a rarity in the discourse that contrasts with incremental industry trends.
Implications
Short-term (2025–2028): We can expect to see early research prototypes that implement parts of this neuroscience-inspired architecture. These might include dual-memory systems that separate fast, adaptive learning from slower, more stable knowledge, or algorithms that use synaptic pruning to maintain efficiency and prevent forgetting. Such features could appear first in academic experiments or in specialised applications, like adaptive robots or personal AI assistants that learn directly on devices. There’s likely to be growing interest in hybrid models that blend today’s transformer approaches with brain-inspired mechanisms, especially where continual adaptation and personalisation are important. Privacy-preserving AI that learns locally (without sending data to the cloud) may gain traction, particularly in sectors like healthcare or finance where data sovereignty is a priority. At the same time, advances in neuromorphic hardware may make these approaches more practical by enabling energy-efficient, on-device learning.
Mid-term (2030s): If these ideas prove successful, they could set the foundation for a new generation of AI systems that learn and adapt over their entire lifespans. This would mark a shift from today’s cloud-trained, static models to dynamic, self-optimising systems that live on the edge. Applications could include long-term AI companion robots or assistants that genuinely evolve with their users, or resilient AI agents that operate independently in remote or challenging environments. As these systems become more widespread, we will need new ways to evaluate and govern them, since traditional benchmarks won’t capture the ongoing, individualised learning of each AI. This could also drive more direct collaboration between AI engineers and cognitive scientists, as understanding and modelling human-like learning becomes central to progress.
Cultural Shifts: As personalised, evolving AIs become more common, society’s relationship with intelligent systems may change. People could form deeper emotional bonds with AI companions that grow and adapt alongside them, raising new questions about trust, identity, and the boundaries between human and machine. The move toward on-device learning may also shift public perceptions of privacy, as personal data stays local rather than being collected at scale. At the same time, as AIs begin to reflect local cultures and user preferences, we may see the emergence of regionally distinct “AI cultures” and new debates about who owns an AI’s evolving knowledge and personality.
Broader Trend: This signal is part of a wider movement away from brute-force scaling and toward more sustainable, adaptable, and human-aligned AI. As the limits of large transformer models become clearer, the field is likely to explore more bio-inspired and cognitive approaches, focusing on systems that are not just bigger, but smarter and more resilient. This could mark a return to AI’s interdisciplinary roots, bridging neuroscience, cognitive science, and computer science in pursuit of machines that can learn, adapt, and perhaps even understand in ways that mirror human intelligence. The long-term result could be a new generation of AI that is not a fixed product, but a lifelong learning companion, growing and changing alongside us.
Estimated Timeline to Impact
2025–2028: As a theoretical proposal, this is at the starting line. Within 2–3 years, we might see research prototypes implementing pieces of this, e.g. dual-memory networks or automated synaptic pruning in neural nets for continual learning.
Post-2028: Widespread adoption of a full architecture inspired by this could be 5-10 years out, if evidence shows it outperforms current approaches on key benchmarks or in real-world adaptability. The personalised edge AGI vision likely aligns with late-2020s to 2030s technology, assuming hardware (for on-device training) and software catch up. In terms of Horizon 3, this is aiming for that true AGI timeframe – possibly a decade or more away for full realisation, but weak signals like this guide the preparatory research now.
Signal Type
Technological/Conceptual: A visionary framework signal in the research literature. It’s less about a single experiment and more about a roadmap for future AI, indicating a possible direction and paradigm change in AI R&D.
Confidence Level
Low: This is largely speculative (albeit informed by neuroscience). There is high uncertainty whether this specific combination of ideas will yield the desired AGI properties. However, confidence that “scaling alone is insufficient” is growing in some circles, and alternative architectures will be explored. This signal’s value is more in highlighting a potential path than in guaranteed outcomes. It’s an early weak signal that the community is starting to think beyond transformers and massive data.
Signal Strength
Low: Few mainstream AI researchers have embraced such holistic brain-based designs yet. Related work in neurosymbolic AI, continual learning, and cognitive architectures is addressing parts of the problem. If those pieces show progress, the call for an integrated solution like this will strengthen. For now, it’s a fringe viewpoint, making it a faint but noteworthy signal.
Analyst Intuition
This signal taps into a long-standing intuition in the AI community that has been overshadowed by the recent success of scale: the intuition that intelligence is as much about architecture and self-organisation as it is about brute force. Our gut feeling is that as we hit diminishing returns on giant models, these ideas will enjoy a renaissance. This particular proposal might not be “the one,” but it suggests an inspiring vision: instead of training monolithic models, we grow AI. Somewhere in these brain-inspired principles might lie the key to AI that can truly think, adapt, and relate.