LAHS senior Robert Strauss has been named a scholar in the Regeneron Science Talent Search. He and LAHS will receive a $2,000 award. Strauss is the only New Mexico student to qualify. He has also competed twice in the Society for Science’s International Science and Engineering Fair. Robert is the son of Charlie and Lynn Strauss. Courtesy photo
REGENERON NEWS RELEASE
Los Alamos High School senior Robert Strauss is among the 300 scholars named today in the Regeneron Science Talent Search, the nation’s oldest and most prestigious science and math competition for high school seniors.
Strauss submitted the project "Neuromorphic Computing: Simulating the Brain's Visual Cortex for More Efficient Computation." His description of the project is featured below.
The 300 scholars and their schools will be awarded $2,000 each.
The Regeneron Science Talent Search scholars were selected from 1,804 applications received from 603 high schools across 46 states, Washington, D.C., Puerto Rico and eight other countries.
Scholars were chosen based on their exceptional research skills, commitment to academics, innovative thinking and promise as scientists as demonstrated through the submission of their original, independent research projects, essays and recommendations.
The 300 scholars hail from 185 American and international high schools in 37 states, China, Switzerland, and Singapore, including three homeschools.
The full list of scholars is available on the Society for Science website.
The Regeneron Science Talent Search provides students with a national stage to present original research and celebrates the hard work and novel discoveries of young scientists who are bringing a fresh perspective to significant global challenges.
This year, research projects cover topics ranging from tracking countries' progress on the Sustainable Development Goals to the impact of individual states' COVID-19 responses, and from improving the tools used to diagnose Alzheimer's to analyzing the effects of virtual learning on education.
"Amid an unprecedented and ongoing global health crisis, we are incredibly inspired to see such an extraordinary group of young leaders who are using the power of STEM to solve the world's most intractable challenges," said Maya Ajmera, President and CEO of Society for Science, Publisher of Science News and 1985 Science Talent Search alum. "The ingenuity and creativity that each one of these scholars possesses has shown just how much intellectual curiosity and passion can thrive, even in difficult times."
"Congratulations to this year's 300 Regeneron Science Talent Search scholars for their remarkable contributions and discoveries in the STEM field," said Christina Chan, Senior Vice President, Corporate Communications & Citizenship at Regeneron. "We are honored to celebrate this new generation of problem solvers who have demonstrated the depth of their innovative thinking, commitment to continuous learning, and ability to tackle global challenges in creative ways."
On Jan. 20, 40 of the 300 scholars will be named Regeneron Science Talent Search finalists. The finalists will then compete for more than $1.8 million in awards during a week-long competition taking place March 10-16.
BY ROBERT STRAUSS
In the science project I submitted to the Regeneron Science Talent Search, I built a neuromorphic simulation of tasks performed by the visual cortex. This was my junior-year project for the International Science and Engineering Fair. Neuromorphic refers to a new kind of computer architecture that is completely unlike any digital computer. Rather than doing binary logic operations with gates and voltage levels, neuromorphic systems compute by firing voltage spikes asynchronously between neurons, which requires thousands of times less energy and has the advantage of being continuous and massively parallel. The underlying architecture superficially resembles an artificial neural net, the kind laypersons might be familiar with from the media, but it's actually the inverse: artificial neural nets (ANNs) are just a mathematical simplification of neuromorphic systems, inspired by the spiking layers of real biological neurons. The difference is that real neural nets don't send floating-point numbers as inputs to some math function, but instead communicate back and forth with timed signals. The timing of these spikes isn't synchronized to a clock; a given neuron fires with increasing probability as its membrane charges up (or loses charge) from spikes fired to it by other neurons. There are many layers of these, and for functions like vision, neighboring neurons are tightly connected, making them similar to the typical convolutional arrangements of artificial neural networks.
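A leaky integrate-and-fire neuron of the kind described above can be sketched in a few lines of Python. This is a minimal illustration only, not the project's actual simulation code: the time constant, threshold, synaptic weight and input spike train are invented for the example, and it uses the simpler deterministic firing rule rather than the probabilistic one described here.

```python
import numpy as np

def simulate_lif(input_spikes, dt=1e-3, tau=20e-3, v_thresh=1.0,
                 v_reset=0.0, weight=0.4):
    """Leaky integrate-and-fire: the membrane voltage decays toward rest,
    jumps on each incoming spike, and emits an output spike when it
    crosses threshold, after which it is reset."""
    v = 0.0
    out = []
    for s in input_spikes:          # s is 1 if an input spike arrives this step
        v += (dt / tau) * (-v)      # leak: exponential decay toward 0
        v += weight * s             # integrate the incoming spike
        if v >= v_thresh:           # fire and reset
            out.append(1)
            v = v_reset
        else:
            out.append(0)
    return np.array(out)

rng = np.random.default_rng(0)
spikes_in = (rng.random(1000) < 0.2).astype(int)   # Poisson-like input train
spikes_out = simulate_lif(spikes_in)
print(spikes_in.sum(), spikes_out.sum())
```

The behavior to notice is that the membrane voltage leaks away between inputs, so the neuron's output rate depends on the timing of incoming spikes, not just their count, and is lower than the input rate because several inputs must accumulate before each output spike.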
I trained my simulated leaky integrate-and-fire neuron mesh to process moving images of objects, both classifying each image by the category it belonged to and stabilizing the moving object into a still image. That might sound like a routine task for an artificial neural net, but there are a few neuromorphic twists. First, the moving images were not a series of frames from a video camera; instead, each pixel emits spikes over time, with no (or effectively infinite) framerate, and the rate of spike emission is not the brightness of the pixel but the rate of change of the pixel's intensity. If nothing moves, there are no spikes and no data. Additionally, these spikes are fired asynchronously and probabilistically, and one is trying to process them in real time, not by waiting for a large number of spikes to accumulate and recreating a "frame" of a pseudo-video. If your brain did it that way, the tiger would have already sunk its teeth into you before you had time to dodge. Finally, these spikes are processed not with a digital computer or a specialized analog computer designed by an expert who already knows the problem's solution. Instead, they're fed into a network of spiking neurons that must learn from scratch to perform the task through training, not a pre-programmed algorithm.
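The change-driven pixels described above are the output of an event camera rather than a frame camera, and their behavior can be approximated from ordinary frames. In this hedged sketch, the threshold and the toy moving-square stimulus are invented for illustration, and a real event sensor emits events asynchronously rather than per frame; the point is simply that only changing pixels produce any data.

```python
import numpy as np

def frames_to_events(frames, threshold=0.1):
    """Emit (t, y, x, polarity) events wherever the log-intensity of a
    pixel changes by more than `threshold` between successive frames --
    a crude stand-in for an event camera's output. Static pixels
    generate no events at all."""
    events = []
    prev = np.log1p(frames[0].astype(float))
    for t, frame in enumerate(frames[1:], start=1):
        cur = np.log1p(frame.astype(float))
        diff = cur - prev
        ys, xs = np.nonzero(np.abs(diff) > threshold)
        for y, x in zip(ys, xs):
            events.append((t, y, x, 1 if diff[y, x] > 0 else -1))
        prev = cur
    return events

# A bright 4x4 square sliding one pixel per frame: only the leading edge
# (brightening) and trailing edge (darkening) of the square produce events.
frames = np.zeros((5, 16, 16))
for t in range(5):
    frames[t, 4:8, t:t+4] = 1.0
events = frames_to_events(frames)
print(len(events))   # 32: two 4-pixel edges per transition, 4 transitions
```

Everything inside the square and everything in the static background is silent, which is exactly the sparsity that makes this representation so cheap to transmit and process.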
Two things are intriguing about these from a computer science point of view. First, the energy cost of a neural computation may be many orders of magnitude less than that of digital transistors (which is why your brain doesn't melt when you watch TV, but your laptop overheats). Second, they have no clocks, so nothing waits for anything else. That means they can compute in parallel not simply across a layer; all layers, depth-wise, are calculating at the same time. They can be far simpler to build, too: an electronic emulation of a biological neuron could be implemented with a single transistor changing state rarely, whereas in an ANN tens of thousands of transistors switch thousands of times to move data from memory through the multiplies, adds and nonlinear functions and back out for every single neuron. So you can see the attraction of this for the future of computing.
The problem, however, is that there's no way to program these. You can't give this system a list of instructions like a normal computer. Not only are there no programming languages, but you can't even think about how to write code. You could imagine ganging together huge numbers of these neurons to emulate a digital computer or to construct a specific algorithm, but that would be swimming against the current. Instead, I decided to see if I could train these neurons as what they are: a network of neurons. This process is conceptually similar to training an artificial neural network with backpropagation: let the untrained system guess at answers, then use the good or bad answers at the output to reinforce how the neurons are connected. But there are a couple of big twists that make it much harder. First, the system is not deterministic but probabilistic, and you can't define the functions that get applied, just the connections and time delays between neurons. Second, the voltage of a neuron's membrane and its propagation along a synapse isn't like a binary digit; it's a biological or electrical circuit behaving according to differential equations. So one has to backpropagate from bad answers not just through finite mathematical functions but through iterations of the integration of differential equations and probabilistic outcomes, to figure out which synapse upstream of the decision to reweight or delay. My project involved doing all of this much more complicated backpropagation without any real neuromorphic physical hardware, so I had to simulate it on an ordinary digital computer, hardware-accelerated with a GPU.
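One published way around the non-differentiable spike during backpropagation, not necessarily the exact method used in this project, is a "surrogate gradient": the forward pass keeps the hard threshold, while the backward pass substitutes a smooth approximation of its derivative. A minimal sketch, where the fast-sigmoid surrogate and the sharpness parameter beta are illustrative choices:

```python
import numpy as np

def spike_forward(v, v_thresh=1.0):
    """Forward pass: a hard threshold -- spike = 1 where the membrane
    voltage reaches threshold, 0 elsewhere."""
    return (v >= v_thresh).astype(float)

def spike_surrogate_grad(v, v_thresh=1.0, beta=10.0):
    """Backward pass: the true derivative of the step function is zero
    almost everywhere, so backprop substitutes a smooth surrogate --
    here the derivative of a fast sigmoid, peaked at the threshold."""
    return beta / (1.0 + beta * np.abs(v - v_thresh)) ** 2

v = np.linspace(0.0, 2.0, 5)         # membrane voltages around threshold
print(spike_forward(v))              # [0. 0. 1. 1. 1.]
print(spike_surrogate_grad(v))       # largest at v == v_thresh
```

In a full training loop this surrogate replaces the spike's derivative wherever the chain rule crosses a firing event, letting error signals flow back through the integrated membrane dynamics to the upstream weights and delays.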
This architecture for computing, which is completely different from the digital computing we are familiar with, is just becoming available. Intel has built neuromorphic hardware (the Intel Loihi chip) that can emulate millions of neurons and will soon approach the scale of actual brains. These systems already use far less power than digital computers calculating similar operations. The problem is that no one is really sure how to program these either; that's where the research is right now. My own work built on data from Caltech, incorporated approaches others had investigated, and assembled libraries of simulation elements from a variety of sources. I was motivated by curiosity, and I was simply lucky to have those tools, datasets and ideas to build on.
But I have been even luckier to have had great mentors over the years. One reason I was ready to tackle systems of millions of differential equations on a GPU was that a few years earlier Dr. Mark Petersen, a Los Alamos National Laboratory (LANL) climate scientist, had taught me the Navier-Stokes fluid dynamics equations for simulating the ocean and the mathematics for forward/backward simulation of their physics. I coded my own solvers from scratch in a project that got me to the International Science and Engineering Fair and the New Mexico Supercomputing Challenge, and earned me recognition from NOAA. I simulated tsunamis across varied landscapes to predict the likely hazardous and safe zones for a tsunami originating at any place on Earth. The following year I studied neural differential equations, which, instead of guessing at the outcome of some physical system, actually learn the underlying physical laws of the system from observations. So when I met Dr. Garret Kenyon, a LANL expert on biologically inspired computing, I had the skills and math needed to get past merely wrangling the mechanics of this simulation and actually make it do something interesting. Since then I have worked at LANL as a high school intern with Mark Petersen's postdocs and graduate students, comparing global ocean models on alternative languages and computing architectures (Fortran vs. Julia, and supercomputer vs. GPU). This introduced me not just to doing my own personal projects but to working in close collaboration with others, presenting my work in our frequent group meetings, and getting and giving feedback. The inspiration and introduction to advanced science that I got from these LANL mentors were really the secret sauce of my success.
About the Regeneron Science Talent Search
The Regeneron Science Talent Search, a program of Society for Science since 1942, is the nation’s oldest and most prestigious science and math competition for high school seniors. Each year, nearly 2,000 student entrants submit original research in critically important scientific fields of study and are judged by leading experts in their fields. Unique among high school competitions in the U.S. and around the world, the Regeneron Science Talent Search focuses on identifying, inspiring and engaging the nation’s most promising young scientists who are creating the ideas that could solve society’s most urgent challenges.
In 2017, Regeneron became only the third sponsor of the Science Talent Search, helping to reward and celebrate the best and brightest young minds and to encourage them to pursue careers in STEM that positively impact the world. Through its 10-year, $100 million commitment, Regeneron nearly doubled the overall award distribution to $3.1 million annually, increasing the top award to $250,000 and doubling the awards for the top 300 scholars and their schools to $2,000 each to inspire more young people to engage in science.
Program alumni include recipients of the world’s most coveted science and math honors, including 13 Nobel Prizes, 11 National Medals of Science, six Breakthrough Prizes, 22 MacArthur Foundation Fellowships and two Fields Medals. Learn more at https://www.societyforscience.org/regeneron-sts/.
About Society for Science
Society for Science is dedicated to the achievement of young scientists in independent research and to public engagement in science. Established in 1921, the Society is a nonprofit whose vision is to promote the understanding and appreciation of science and the vital role it plays in human advancement. Through its world-class competitions, including the Regeneron Science Talent Search, the Regeneron International Science and Engineering Fair and the Broadcom MASTERS, and its award-winning magazines, Science News and Science News for Students, Society for Science is committed to informing, educating and inspiring. Learn more at www.societyforscience.org and follow the Society on Facebook, Twitter, Instagram and Snapchat (Society4Science).