- The total failure of Europe's AI and robotics sectors
- China’s meteoric rise in AI, robotics, and open-source leadership
- The USA's shift to anti-European, protectionist, vulgar policies
What new opportunities emerge from this confluence of trends?
ChatGPT o3-mini-high: despite the propaganda, it's a horrible model for most use cases.
On-demand Interactive Education
I remember struggling to grasp complex numbers as a teen. My teacher, always short on time and impatient, couldn’t help. Now? Just ask an LLM, ‘Create an interactive tool to teach me complex numbers,’ chat for a few minutes, and boom—concept mastered. Total game-changer.
And of course this works for any topic, no matter how simple or complex.
Time is said to have only one dimension, and space to have three dimensions. ... The mathematical quaternion partakes of both these elements; in technical language it may be said to be "time plus space", or "space plus time": And in this sense it has, or at least involves a reference to, four dimensions. ... And how the One of Time, of Space the Three, Might in the Chain of Symbols girdled be. — William Rowan Hamilton (c. 1853)
Hyperdimensional Computing (HDC) Playground
Spent a few hours this weekend learning about Hyperdimensional Computing (HDC), inspired by a fun chat with @mwilcox & friends. Built an IPython notebook with toy examples—like an HDC Autoencoder for ImageNet—to learn by tinkering: https://github.com/samim23/hyperdimensional_computing_playground
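For flavor, here is a minimal sketch of the kind of toy example the notebook tinkers with, assuming bipolar hypervectors with elementwise binding and majority-vote bundling; the names and dimensionality below are illustrative, not taken from the repo.

```python
import numpy as np

D = 10_000  # hypervector dimensionality (illustrative)

def hv():
    """Random bipolar hypervector."""
    return np.random.choice([-1, 1], size=D)

def bind(a, b):
    """Binding: elementwise multiplication (self-inverse for bipolar vectors)."""
    return a * b

def bundle(*vs):
    """Bundling: elementwise majority vote across hypervectors."""
    return np.sign(np.sum(vs, axis=0))

def sim(a, b):
    """Cosine similarity between two hypervectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Encode a tiny record {color: red, shape: circle} and query it back.
color, shape, red, circle = hv(), hv(), hv(), hv()
record = bundle(bind(color, red), bind(shape, circle))
print(sim(bind(record, color), red))     # high: unbinding with 'color' recovers 'red'
print(sim(bind(record, color), circle))  # near zero: 'circle' is not the color
```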
We're entering an era where LLM agents can quickly clone any SaaS application, eroding traditional defenses and economic moats. This will accelerate the 'open-source everything' movement and have a significant impact on society.
BUD-E 1.0 - an open-source, browser-based voice assistant that works out of the box with self-hosted and third-party APIs and stores user data locally in the browser.
The sophistication of real-time video classification & analysis models these days is amazing. You can run them open source, locally, on cheap inference hardware. Transformative. (screenshot: Moondream 2B & Ollama)
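As a rough illustration of how lightweight the local setup can be, the sketch below sends a single saved video frame to a locally running Ollama server with a Moondream model pulled; the endpoint and field names reflect my understanding of Ollama's HTTP API, so treat them as assumptions to verify against the docs.

```python
import base64
import json
import urllib.request

# Assumes `ollama serve` is running locally, `ollama pull moondream` was done,
# and a video frame has been saved as frame.jpg (e.g., grabbed with OpenCV).
with open("frame.jpg", "rb") as f:
    frame_b64 = base64.b64encode(f.read()).decode()

payload = {
    "model": "moondream",
    "prompt": "Describe what is happening in this frame.",
    "images": [frame_b64],
    "stream": False,
}
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```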
As artificial intelligence advances at breakneck speed, we must prioritize the evolution of human intelligence. Education—particularly for children—is more critical than ever, yet it remains anchored in outdated models from the last century.
Discussing 'human education reform' with LLMs is an enlightening exercise. Asking for a 'comprehensive homeschooling course' brings up some thought-provoking ideas about the future.
The LLM prompt 'Explain this to me like I’m 14: [your content]' is criminally underrated. This kind of 'context2context translation'—bridging complex ideas with simple explanations (and vice versa)—has truly world-shaking potential.
We present VortexNet, a novel neural network architecture that leverages principles from fluid dynamics to address fundamental challenges in temporal coherence and multi-scale information processing. Drawing inspiration from von Karman vortex streets, coupled oscillator systems, and energy cascades in turbulent flows, our model introduces complex-valued state spaces and phase coupling mechanisms that enable emergent computational properties. By incorporating a modified Navier–Stokes formulation—similar to yet distinct from Physics-Informed Neural Networks (PINNs) and other PDE-based neural frameworks—we implement an implicit form of attention through physical principles. This reframing of neural layers as self-organizing vortex fields naturally addresses issues such as vanishing gradients and long-range dependencies by harnessing vortex interactions and resonant coupling. Initial experiments and theoretical analyses suggest that VortexNet integrates information across multiple temporal and spatial scales more robustly and adaptably than standard deep architectures.
Traditional neural networks, despite their success, often struggle with temporal coherence and multi-scale information processing. Transformers and recurrent networks can tackle some of these challenges but might suffer from prohibitive computational complexity or vanishing gradient issues when dealing with long sequences. Drawing inspiration from fluid dynamics phenomena—such as von Karman vortex streets, energy cascades in turbulent flows, and viscous dissipation—we propose VortexNet, a neural architecture that reframes information flow in terms of vortex formation and phase-coupled oscillations.
Our approach builds upon and diverges from existing PDE-based neural frameworks, including PINNs (Physics-Informed Neural Networks), Neural ODEs, and more recent Neural Operators (e.g., Fourier Neural Operator). While many of these works aim to learn solutions to PDEs given physical constraints, VortexNet internalizes PDE dynamics to drive multi-scale feature propagation within a neural network context. It is also conceptually related to oscillator-based and reservoir-computing paradigms—where dynamical systems are leveraged for complex spatiotemporal processing—but introduces a core emphasis on vortex interactions and implicit attention fields.
Interestingly, this echoes the MONIAC and other early analog computers that harnessed fluid-inspired mechanisms. Similarly, recent innovations like microfluidic chips and neural networks highlight how physical systems can inspire new computational paradigms. While fundamentally different in its goals, VortexNet demonstrates how physical analogies can continue to inform and enrich modern computational architectures.
Core Contributions:
The network comprises interleaved “vortex layers” that generate counter-rotating activation fields. Each layer operates on a complex-valued state space S(z,t), where z represents the layer depth and t the temporal dimension. Inspired by, yet distinct from PINNs, we incorporate a modified Navier–Stokes formulation for the evolution of the activation:
∂S/∂t = ν∇²S - (S·∇)S + F(x)
Here, ν is a learnable viscosity parameter, and F(x) represents input forcing. Importantly, the PDE perspective is not merely for enforcing physical constraints but for orchestrating oscillatory and vortex-based dynamics in the hidden layers.
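As a concrete, simplified reading of this formulation, the sketch below evaluates the right-hand side of the PDE for a 1D complex-valued state with finite differences; the discretization choices (central differences, periodic boundaries) are assumptions for illustration, not details given in the text.

```python
import numpy as np

def vortex_rhs(S, F, nu, dx=1.0):
    """Evaluate dS/dt = nu*∇²S - (S·∇)S + F(x) for a 1D complex field S.

    Central differences and periodic boundaries are assumed purely for
    illustration; nu would be a learnable parameter inside the network.
    """
    lap = (np.roll(S, -1) - 2.0 * S + np.roll(S, 1)) / dx**2   # ∇²S (diffusion)
    grad = (np.roll(S, -1) - np.roll(S, 1)) / (2.0 * dx)       # ∇S (central difference)
    return nu * lap - S * grad + F                             # diffusion - advection + forcing
```

In the 1D case the advection term (S·∇)S reduces to S·∂S/∂z, which is what S * grad computes above.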
A hierarchical resonance mechanism is introduced via the dimensionless Strouhal-Neural number (Sn):
Sn = (f·D)/A = φ(ω,λ)
In fluid dynamics, the Strouhal number is central to describing vortex shedding phenomena; here, the quantities f, D, and A are reinterpreted in a neural context.
By tuning these parameters, one can manage how quickly and strongly oscillations propagate through the network. The Strouhal-Neural number thus serves as a guiding metric for emergent rhythmic activity and multi-scale coordination across layers.
We implement a novel homeostatic damping mechanism based on the local Lyapunov exponent spectrum, preventing both excessive dissipation and unstable amplification of activations. The damping is applied as:
γ(t) = α·tanh(β·||∇L||) + γ₀
Here, ||∇L|| is the magnitude of the gradient of the loss function with respect to the vortex layer outputs, α and β are hyperparameters controlling the nonlinearity of the damping function, and γ₀ is a baseline damping offset. This dynamic damping helps keep the network in a regime where oscillations are neither trivial nor diverging, aligning with the stable/chaotic transition observed in many physical systems.
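A direct transcription of this damping rule, with placeholder hyperparameter values (the defaults below are illustrative, not reported settings):

```python
import numpy as np

def homeostatic_damping(grad_L, alpha=1.0, beta=0.1, gamma0=0.01):
    """gamma(t) = alpha * tanh(beta * ||grad_L||) + gamma0.

    grad_L is the gradient of the loss w.r.t. the vortex layer outputs;
    alpha and beta shape the nonlinearity, gamma0 is the baseline offset.
    """
    return alpha * np.tanh(beta * np.linalg.norm(grad_L)) + gamma0

# Small gradients keep damping near gamma0; large gradients saturate near alpha + gamma0.
print(homeostatic_damping(np.zeros(8)))         # ~0.01
print(homeostatic_damping(100.0 * np.ones(8)))  # ~1.01
```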
To integrate the modified Navier–Stokes equation into a neural pipeline, VortexNet discretizes S(z,t) over time steps and spatial/channel dimensions, and a lightweight PDE solver that advances S is unrolled within the computational graph. For 1D or 2D tasks, finite differences with periodic or reflective boundary conditions can be used to approximate spatial derivatives. The unrolled solve costs O(T · M) or O(T · M log M), where T is the unrolled time dimension and M is the spatial/channel resolution; this can sometimes be more efficient than explicit O(n²) attention when sequences grow large. When ν or f are large, the network will learn to self-regulate amplitude growth via γ(t).
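Putting these pieces together, a minimal unrolled solver might look like the sketch below. It assumes an explicit Euler time-stepping scheme on a 1D periodic grid, which the text leaves open, so treat it as one plausible instantiation; its cost is O(T · M) as noted above.

```python
import numpy as np

def unroll_vortex_layer(S0, F, nu, dt=0.01, steps=16, dx=1.0):
    """Unroll `steps` explicit Euler updates of dS/dt = nu*∇²S - (S·∇)S + F
    for a 1D complex state with periodic boundaries (cost O(T·M))."""
    S = S0.copy()
    for _ in range(steps):
        lap = (np.roll(S, -1) - 2.0 * S + np.roll(S, 1)) / dx**2
        grad = (np.roll(S, -1) - np.roll(S, 1)) / (2.0 * dx)
        S = S + dt * (nu * lap - S * grad + F)
    return S

# Toy usage: 64 spatial/channel positions, random complex state and forcing.
rng = np.random.default_rng(0)
M = 64
S0 = rng.standard_normal(M) + 1j * rng.standard_normal(M)
F = 0.1 * rng.standard_normal(M)
print(unroll_vortex_layer(S0, F, nu=0.2).shape)  # (64,)
```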
While traditional attention mechanisms in neural networks rely on explicit computation of similarity scores between elements, VortexNet’s vortex dynamics offer an implicit form of attention grounded in physical principles. This reimagining yields parallels and distinctions from standard attention layers.
In standard attention, weights are computed via:
A(Q,K,V) = softmax(QK^T / √d) V
In contrast, VortexNet’s attention emerges via vortex interactions within S(z,t):
A_vortex(S) = ∇ × (S·∇)S
When two vortices come into proximity, they influence each other’s trajectories through the coupled terms in the Navier–Stokes equation. This physically motivated attention requires no explicit pairwise comparison; rotational fields drive the emergent “focus” effect.
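To make the contrast concrete, here is a sketch of that operator for a 2D field read as a velocity-like pair S = (u, v), where the 2D curl of the advection term yields a scalar field that can be viewed as an implicit attention map; the grid, boundary handling, and the 2D reading of S are my assumptions.

```python
import numpy as np

def vortex_attention(u, v, dx=1.0):
    """A_vortex(S) = ∇ × (S·∇)S for a 2D field S = (u, v).

    Uses periodic central differences; in 2D the curl returns a scalar
    field, read here as a physics-driven, implicit attention map.
    """
    def ddx(a):  # derivative along x (axis=1)
        return (np.roll(a, -1, axis=1) - np.roll(a, 1, axis=1)) / (2.0 * dx)
    def ddy(a):  # derivative along y (axis=0)
        return (np.roll(a, -1, axis=0) - np.roll(a, 1, axis=0)) / (2.0 * dx)
    adv_u = u * ddx(u) + v * ddy(u)   # x-component of (S·∇)S
    adv_v = u * ddx(v) + v * ddy(v)   # y-component of (S·∇)S
    return ddx(adv_v) - ddy(adv_u)    # 2D curl of the advection term

rng = np.random.default_rng(0)
u, v = rng.standard_normal((2, 32, 32))
print(vortex_attention(u, v).shape)  # (32, 32)
```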
Transformers typically employ multi-head attention, where each head extracts different relational patterns. Analogously, VortexNet’s counter-rotating vortex pairs create multiple channels of information flow, with each pair focusing on different frequency components of the input, guided by their Strouhal-Neural numbers.
Whereas transformer-style attention has O(n²) complexity for sequence length n, VortexNet integrates interactions through the diffusion term ν∇²S and the phase coupling φ(ω, λ). These multi-scale interactions can reduce computational overhead, as they are driven by PDE-based operators rather than explicit pairwise calculations.
The meta-stable states supported by vortex dynamics serve as continuous memory, analogous to key-value stores in standard attention architectures. However, rather than explicitly storing data, the network’s memory is governed by evolving vortex fields, capturing time-varying context in a continuous dynamical system.
Dimensionless analysis and chaotic dynamics provide a valuable lens for understanding VortexNet’s behavior: by feeding the gradient magnitude ||∇L|| into our adaptive damping, we effectively constrain the system at the “edge of chaos,” balancing expressivity (rich oscillations) with stability (bounded gradients).
Reframing neural computation in terms of self-organizing fluid dynamic systems allows VortexNet to leverage well-studied PDE behaviors (e.g., vortex shedding, damping, boundary layers), which aligns with but goes beyond typical PDE-based or physics-informed approaches.
Scaling this approach further will require efficient differentiable PDE solvers, O(n) or O(n log n) scaling methods, and hardware acceleration (e.g., GPU or TPU). Open-sourcing such solvers could catalyze broader exploration of vortex-based networks.
Another promising direction is adaptively tuning ν and λ using local Lyapunov exponents, ensuring that VortexNet remains near a critical regime for maximal expressivity.
We have introduced VortexNet, a neural architecture grounded in fluid dynamics, emphasizing vortex interactions and oscillatory phase coupling to address challenges in multi-scale and long-range information processing. By bridging concepts from partial differential equations, dimensionless analysis, and adaptive damping, VortexNet provides a unique avenue for implicit attention, improved gradient flow, and emergent attractor dynamics. While initial experiments are promising, future investigations and detailed theoretical analyses will further clarify the potential of vortex-based neural computation. We believe this fluid-dynamics-inspired approach can open new frontiers in both fundamental deep learning research and practical high-dimensional sequence modeling.
This repository contains toy implementations of some of the concepts introduced in this research.