The National Highway Traffic Safety Administration has opened a preliminary investigation after a Waymo autonomous vehicle struck a child near an elementary school in Santa Monica on January 23, prompting renewed scrutiny of how self-driving cars behave in school zones. The inquiry will examine the vehicle’s speed, its reactions to visual cues such as crossing guards and parked cars, and how the company handled the incident afterward. The case highlights the tension between promising safety technology and real-world unpredictability where children are present, and what regulators conclude could change where and when these systems are allowed to operate.
The crash took place within two blocks of an elementary school during morning drop-off, a time and place with heavy pedestrian activity and vehicle congestion. Witnesses reported double-parked cars and an active crossing guard in the area, which creates a complex scene for any driver, human or machine. Officials say the child came into the roadway from behind a double-parked SUV and sustained minor injuries after contact with the vehicle. No human safety operator was present inside the car at the time of the event.
NHTSA’s Office of Defects Investigation is focused on whether the automated driving system exercised appropriate caution in that sensitive environment. Investigators will look at whether the vehicle respected posted speed limits and whether its sensors and decision logic appropriately handled the visual clutter of parked vehicles and pedestrians. The agency will also review the vehicle’s post-impact behavior and how Waymo reported and responded to the incident. That review aims to determine if the system met federal safety expectations for operation near schools.
Waymo told regulators it contacted them the same day and said it will cooperate with the probe. The company argues its technology reduced the impact speed dramatically compared with a human driver in the same scenario, and it emphasized the need to evaluate outcomes against peer-reviewed models. Officials will compare the system’s recorded behavior to expected performance and to how a fully attentive human driver might have acted. The findings could shape operational limits, reporting requirements, and how much local oversight cities can impose.
“At Waymo, we are committed to improving road safety, both for our riders and all those with whom we share the road. Part of that commitment is being transparent when incidents occur, which is why we are sharing details regarding an event in Santa Monica, California, on Friday, January 23, where one of our vehicles made contact with a young pedestrian. Following the event, we voluntarily contacted the National Highway Traffic Safety Administration (NHTSA) that same day. NHTSA has indicated to us that they intend to open an investigation into this incident, and we will cooperate fully with them throughout the process.
“The event occurred when the pedestrian suddenly entered the roadway from behind a tall SUV, moving directly into our vehicle’s path. Our technology immediately detected the individual as soon as they began to emerge from behind the stopped vehicle. The Waymo Driver braked hard, reducing speed from approximately 17 mph to under 6 mph before contact was made.
“To put this in perspective, our peer-reviewed model shows that a fully attentive human driver in this same situation would have made contact with the pedestrian at approximately 14 mph. This significant reduction in impact speed and severity is a demonstration of the material safety benefit of the Waymo Driver.
“Following contact, the pedestrian stood up immediately and walked to the sidewalk, and we called 911. The vehicle came to a stop, moved to the side of the road, and remained there until law enforcement cleared the vehicle to leave the scene.
“This event demonstrates the critical value of our safety systems. We remain committed to improving road safety where we operate as we continue on our mission to be the world’s most trusted driver.”
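The speed figures in Waymo’s statement can be put in rough physical terms: kinetic energy, a common proxy for impact severity, scales with the square of speed, so contact at under 6 mph carries a small fraction of the energy of contact at the modeled 14 mph human-driver baseline. The sketch below is purely illustrative arithmetic based on the numbers in the statement, not an analysis of the actual collision dynamics.

```python
def impact_energy_ratio(v_actual_mph: float, v_baseline_mph: float) -> float:
    """Kinetic energy scales with the square of speed (E = 1/2 m v^2),
    so for the same mass, the ratio of impact energies is simply the
    square of the speed ratio. Units cancel, so mph is fine here."""
    return (v_actual_mph / v_baseline_mph) ** 2

# Reported contact speed (<6 mph) vs. the modeled fully attentive
# human-driver baseline (~14 mph), per the company's statement.
ratio = impact_energy_ratio(6.0, 14.0)
print(f"{ratio:.0%}")  # roughly 18% of the baseline impact energy
```

By this back-of-the-envelope measure, contact at just under 6 mph involves less than a fifth of the impact energy of contact at 14 mph, which is the substance of Waymo’s claim about reduced severity.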
Waymo operates under Level 4 autonomy, which means its vehicles are designed to handle all driving tasks within defined service areas without a human driver on board. Level 4 platforms are not sold to consumers and are limited to specific ride-hailing services and cities. That operational distinction matters because regulators treat these fleets differently than ordinary cars and can impose geographic or time-of-day constraints. How strictly those boundaries are enforced could change as a result of this inquiry.
This probe follows an earlier NHTSA review that examined instances of Waymo vehicles striking stationary objects or apparently not following traffic controls; that prior evaluation was closed after regulators reviewed data and company responses. Still, safety advocates say unresolved questions remain about how autonomous systems handle the messy, dynamic scenes around schools and playgrounds. Those unresolved issues are exactly what NHTSA will be testing now.
For people who live where driverless cars operate, the stakes are practical and immediate: families expect that school drop-off zones remain safe, and commuters want clarity about where these systems can and cannot operate. Regulators will be weighing technical evidence and real-world risk to decide whether new operational limits, stricter reporting, or additional local oversight are needed. The outcome could reshape the pace and geography of autonomous vehicle rollouts across the country.
As the investigation proceeds, observers will watch how quickly regulators can turn data into rules and whether the findings prompt tighter constraints around vulnerable locations like schools. The episode is a reminder that technological gains must prove themselves not just in controlled tests but in chaotic, everyday environments where human behavior remains unpredictable. Parents, riders, and city officials will all be watching the next steps closely.
