
Q-Day: Catastrophic For Businesses Ignoring Quantum-Resistant Encryption

#Quantum #CyberSecurity


Quantum computing is not merely a frontier of innovation; it is a countdown. Q-Day is the pivotal moment when scalable quantum computers undermine the cryptographic underpinnings of our digital realm, and it is approaching faster than many realize.

For corporations and governmental entities reliant on outdated encryption methods, Q-Day will not herald a smooth transition; it may signify a digital catastrophe.

Comprehending Q-Day: The Quantum Reckoning

Q-Day arrives when quantum machines using Shor’s algorithm can dismantle public-key encryption within minutes—a task that classical supercomputers would require billions of years to accomplish.
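The threat is mathematical, not speculative. Shor's algorithm reduces factoring (the hard problem behind RSA) to order-finding, and only the order-finding step requires a quantum computer. A minimal classical sketch of that reduction, with the order found by brute force where quantum hardware would find it efficiently (function names and the tiny moduli are illustrative, not from any real attack):

```python
from math import gcd

def find_order(a: int, n: int) -> int:
    """Smallest r > 0 with a^r ≡ 1 (mod n). Brute-forced here;
    this is the one step a quantum computer speeds up via the QFT."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_factor(n: int, a: int) -> tuple[int, int]:
    """Classical skeleton of Shor's reduction: factor n from the order of a."""
    assert gcd(a, n) == 1
    r = find_order(a, n)
    assert r % 2 == 0          # this choice of a must yield an even order
    p = gcd(pow(a, r // 2) - 1, n)
    q = gcd(pow(a, r // 2) + 1, n)
    return p, q

print(shor_factor(15, 7))  # → (3, 5)
```

For a 2048-bit RSA modulus the order-finding loop above is hopeless classically, which is exactly the gap a scalable quantum computer closes.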

Brain-inspired machines are better at math than expected

Neuromorphic computers modeled after the human brain can now solve the complex equations behind physics simulations — something once thought possible only with energy-hungry supercomputers. The breakthrough could lead to powerful, low-energy supercomputers while revealing new secrets about how our brains process information.

Why Time Doesn’t Exist | Leonard Susskind

We experience time as something that flows. Seconds pass. Moments disappear. The future becomes the present and then turns into the past.

But modern physics does not describe time this way.

In this video, we explore why time — as we intuitively understand it — may not exist at the fundamental level of reality.

Drawing on ideas associated with Leonard Susskind, this documentary examines how relativity and quantum physics challenge the idea of a flowing temporal river. Einstein’s theory removes the notion of a universal present. There is no global “now” that sweeps across the universe.

Without a universal present, the idea of time flowing becomes difficult to define physically.

In the relativistic picture, spacetime is a four-dimensional structure. Events are not created moment by moment. They are embedded in geometry. The equations of physics do not contain a moving present. They describe relations between events.

Silicon metasurfaces boost optical image processing with passive intensity-based filtering

Of the many feats achieved by artificial intelligence (AI), the ability to process images quickly and accurately has had an especially impressive impact on science and technology. Now, researchers in the McKelvey School of Engineering at Washington University in St. Louis have found a way to improve the efficiency and capability of machine vision and AI diagnostics using optical systems instead of traditional digital algorithms.

Mark Lawrence, an assistant professor of electrical and systems engineering, and doctoral student Bo Zhao developed this approach to achieve efficient processing performance without high energy consumption. Typically, all-optical image processing is highly constrained by the lack of nonlinearity, which usually requires high light intensities or external power, but the new method uses nanostructured films called metasurfaces to enhance optical nonlinearity passively, making it practical for everyday use.
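As a loose numerical analogy for what intensity-based filtering does (a toy model, not the paper's physics, and `i_sat` is an invented illustrative parameter), consider a nonlinear transmission that passes bright pixels while suppressing dim ones:

```python
import numpy as np

def intensity_filter(image: np.ndarray, i_sat: float = 0.5) -> np.ndarray:
    """Toy nonlinear transmission: transmission rises with intensity,
    so bright pixels pass nearly unchanged and dim pixels are suppressed."""
    transmission = image**2 / (image**2 + i_sat**2)
    return image * transmission

img = np.array([0.1, 0.5, 0.9])   # dim, medium, bright pixels
print(intensity_filter(img))      # dim pixel shrinks far more than bright one
```

The point of the metasurface result is that a response of this qualitative shape can be realized passively in the optics themselves, with no digital post-processing or external power.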

Their work shows the ability to filter images based on light intensity, potentially making all-optical neural networks more powerful without using additional energy. Results of the research were published online in Nano Letters on Jan. 21, 2026.

AI method accelerates liquid simulations by learning fundamental physical relationships

Researchers at the University of Bayreuth have developed a method using artificial intelligence that can significantly speed up the calculation of liquid properties. The AI approach predicts the chemical potential—an indispensable quantity for describing liquids in thermodynamic equilibrium. The researchers present their findings in a new study published in Physical Review Letters.

Many common AI methods are based on the principle of supervised machine learning: a model—for instance, a neural network—is specifically trained to predict a particular target quantity directly. One example that illustrates this approach is image recognition, where the AI system is shown numerous images in which it is known whether or not a cat is depicted. On this basis, the system learns to identify cats in new, previously unseen images.
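The cat-recognition analogy can be sketched as a minimal supervised learner; the two "features" and the data below are invented purely for illustration:

```python
import numpy as np

# Toy "cat vs. not-cat" data: two made-up features per image
# (say, ear-pointiness and whisker-density) — purely illustrative.
X = np.array([[0.9, 0.8], [0.8, 0.9], [0.1, 0.2], [0.2, 0.1]])
y = np.array([1, 1, 0, 0])                 # 1 = cat, 0 = not cat

w, b = np.zeros(2), 0.0
for _ in range(2000):                      # plain gradient descent
    p = 1 / (1 + np.exp(-(X @ w + b)))    # sigmoid predictions
    w -= 0.5 * X.T @ (p - y) / len(y)
    b -= 0.5 * np.sum(p - y) / len(y)

new_image = np.array([0.85, 0.75])         # previously unseen example
print(1 / (1 + np.exp(-(new_image @ w + b))) > 0.5)  # → True (classified as cat)
```

The chemical potential resists exactly this recipe: producing each training label would itself require an expensive simulation, which is why Schmidt and Sammüller build the theory of liquids into the network instead.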

“However, such a direct approach is difficult in the case of the chemical potential, because determining it usually requires computationally expensive algorithms,” says Prof. Dr. Matthias Schmidt, Chair of Theoretical Physics II at the University of Bayreuth. He and his research associate Dr. Florian Sammüller address this challenge with their newly developed AI method. It is based on a neural network that incorporates the theoretical structure of liquids—and more generally, of soft matter—allowing it to predict their properties with great accuracy.

JUST RECORDED: Elon Musk Announces MAJOR Company Shakeup

Elon Musk announces significant changes and advancements across his companies, xAI and SpaceX, primarily focused on developing and integrating artificial intelligence (AI) to drive innovation, productivity, and growth. Questions to inspire discussion follow.

Product Development & Market Position.

🚀 Q: How fast did xAI achieve market leadership compared to competitors?

A: xAI reached number one in voice, image, video generation, and forecasting with the Grok 4.20 model in just 2.5 years, outpacing competitors that are 5–20 years old with larger teams and more resources.

📱 Q: What scale did xAI’s everything app reach in one year?

A: In one year, xAI went from nothing to 2M Teslas using Grok, deployed a Grok voice agent API, and built an everything app handling legal questions, slide decks, and puzzles.

AI Discovers Geophysical Turbulence Model

One of the biggest challenges in climate science and weather forecasting is predicting the effects of turbulence at spatial scales smaller than the resolution of atmospheric and oceanic models. Simplified sets of equations known as closure models can predict the statistics of this “subgrid” turbulence, but existing closure models are prone to dynamic instabilities or fail to account for rare, high-energy events. Now Karan Jakhar at the University of Chicago and his colleagues have applied an artificial-intelligence (AI) tool to data generated by numerical simulations to uncover an improved closure model [1]. The finding, which the researchers subsequently verified with a mathematical derivation, offers insights into the multiscale dynamics of atmospheric and oceanic turbulence. It also illustrates that AI-generated prediction models need not be “black boxes,” but can be transparent and understandable.

The team trained their AI—a so-called equation-discovery tool—on “ground-truth” data that they generated by performing computationally costly, high-resolution numerical simulations of several 2D turbulent flows. The AI selected the smallest number of mathematical functions (from a library of 930 possibilities) that, in combination, could reproduce the statistical properties of the dataset. Previously, researchers have used this approach to reproduce only the spatial structure of small-scale turbulent flows. The tool used by Jakhar and collaborators filtered for functions that correctly represented not only the structure but also energy transfer between spatial scales.
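Equation-discovery tools of this kind are commonly implemented as sparse regression over a function library (the SINDy approach); whether Jakhar's team used this exact machinery is an assumption here. A minimal sketch, with a made-up five-function library standing in for the paper's 930 and a synthetic "ground-truth" signal to rediscover:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200)

# Candidate library — five illustrative functions standing in for 930
library = np.column_stack([x, x**2, x**3, np.sin(x), np.cos(x)])
names = ["x", "x^2", "x^3", "sin(x)", "cos(x)"]

target = 2.0 * x**2 - 0.7 * np.sin(x)   # synthetic signal to rediscover

# Sequential thresholded least squares: fit, prune tiny terms, refit
coef = np.linalg.lstsq(library, target, rcond=None)[0]
for _ in range(5):
    small = np.abs(coef) < 0.1          # prune near-zero coefficients
    coef[small] = 0.0
    active = ~small
    coef[active] = np.linalg.lstsq(library[:, active], target, rcond=None)[0]

print({n: round(float(c), 2) for n, c in zip(names, coef) if c != 0.0})
# → {'x^2': 2.0, 'sin(x)': -0.7}
```

Because the output is a short symbolic expression rather than a trained network, the resulting closure model can be inspected and, as in this work, verified by hand — the opposite of a black box.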

They tested the performance of the resulting closure model by applying it to a computationally practical, low-resolution version of the dataset. The model accurately captured the detailed flow structures and energy transfers that appeared in the high-resolution ground-truth data. It also predicted statistically rare conditions corresponding to extreme-weather events, which have challenged previous models.

A long-lost Soviet spacecraft: AI could finally solve the mystery of Luna 9’s landing site

Using an advanced machine-learning algorithm, researchers in the UK and Japan have identified several promising candidate locations for the long-lost landing site of the Soviet Luna 9 spacecraft. Publishing their results in npj Space Exploration, the team, led by Lewis Pinault at University College London, hope that their model’s predictions could soon be tested using new observations from India’s Chandrayaan-2 orbiter.

In 1966, the USSR’s Luna 9 mission became the first human-made object to land safely on the moon’s surface and to transmit photographs from another celestial body. Compared with modern missions, the landing was dramatic: shortly before the main spacecraft itself struck the lunar surface, it deployed a 58-cm-wide, roughly 100-kg spherical landing capsule from above, then maneuvered away to crash at a safe distance.

Equipped with inflatable shock absorbers, the capsule bounced several times before coming to rest, stabilizing itself by unfurling four petal-like panels. Although Luna 9 operated for just three days, it transmitted a wealth of valuable data back to Earth, helping to inspire the confidence in crewed space exploration that would see humanity take its first steps on the moon just three years later.

Seeing the whole from a part: Revealing hidden turbulent structures from limited observations and equations

The irregular, swirling motion of fluids we call turbulence can be found everywhere, from stirring in a teacup to currents in the planetary atmosphere. This phenomenon is governed by the Navier-Stokes equations—a set of mathematical equations that describe how fluids move.

Despite being known for nearly two centuries, these equations still pose major challenges when it comes to making predictions. Turbulent flows are inherently chaotic, and tiny uncertainties can grow quickly over time.
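The Lorenz system is the standard toy illustration of this sensitivity (a three-variable caricature of convection, not Navier-Stokes itself): two trajectories started 10⁻⁸ apart diverge by many orders of magnitude within a few model-time units:

```python
import numpy as np

def lorenz_step(s, dt=0.01, sigma=10.0, rho=28.0, beta=8 / 3):
    """One forward-Euler step of the Lorenz system."""
    x, y, z = s
    return s + dt * np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-8, 0.0, 0.0])   # tiny initial uncertainty

for _ in range(2000):                # integrate 20 time units
    a, b = lorenz_step(a), lorenz_step(b)

print(np.linalg.norm(a - b))         # separation has grown by orders of magnitude
```

This exponential error growth is why reconstructing the unobserved small scales of a real turbulent flow from partial observations is such a delicate question.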

In real-world situations, scientists can only observe part of a turbulent flow, usually its largest and slowest moving features. Thus, a long-standing question in fluid physics has been whether these partial observations are enough to reconstruct the full motion of the fluid.

How scientists are trying to use AI to unlock the human mind

Compared with conventional psychological models, which use simple math equations, Centaur did a far better job of predicting behavior. Accurate predictions of how humans respond in psychology experiments are valuable in and of themselves: For example, scientists could use Centaur to pilot their experiments on a computer before recruiting, and paying, human participants. In their paper, however, the researchers propose that Centaur could be more than just a prediction machine. By interrogating the mechanisms that allow Centaur to effectively replicate human behavior, they argue, scientists could develop new theories about the inner workings of the mind.

But some psychologists doubt whether Centaur can tell us much about the mind at all. Sure, it’s better than conventional psychological models at predicting how humans behave—but it also has a billion times more parameters. And just because a model behaves like a human on the outside doesn’t mean that it functions like one on the inside. Olivia Guest, an assistant professor of computational cognitive science at Radboud University in the Netherlands, compares Centaur to a calculator, which can effectively predict the response a math whiz will give when asked to add two numbers. “I don’t know what you would learn about human addition by studying a calculator,” she says.

Even if Centaur does capture something important about human psychology, scientists may struggle to extract any insight from the model’s millions of neurons. Though AI researchers are working hard to figure out how large language models work, they’ve barely managed to crack open the black box. Understanding an enormous neural-network model of the human mind may not prove much easier than understanding the thing itself.
