Building a digital athlete: Using AI to rewrite the playbook on NFL player safety

By Jennifer Langton, NFL senior vice president of health and safety innovation

Just as National Football League (NFL) players train and practice to constantly improve, the NFL continually seeks new ways to make the game safer for its players and more exciting for fans. A key element of this work is the development of the Digital Athlete, a joint effort between the NFL and Amazon Web Services (AWS) that represents the next generation of player health and safety for the league.

The Digital Athlete uses artificial intelligence (AI) and machine learning (ML) to build a complete view of players’ experience, enabling NFL teams to understand precisely what individual players need to stay healthy, recover quickly, and perform at their best. In time, the technology, which was used across all 32 NFL clubs for the first time this past season, may be able to help predict and prevent injuries.

Building the right team

The NFL kicked off the Digital Athlete to help teams improve how they gathered and analyzed injury information, a process that was previously siloed and time-consuming. After conducting a roadshow to find the right partner to build the technology, the league ultimately teamed with AWS.

“AWS was the obvious choice given its best-in-class cloud technology capabilities and its unmatched ability to collect and make sense of large amounts of data,” says Jennifer Langton, senior vice president of health and safety innovation at the NFL. “We are proud to work closely with AWS on technology that is transforming how NFL teams are approaching player health and safety.”

Player data: A full picture from the field to the cloud

To build a complete view of players’ experience, the Digital Athlete gathers data from a range of sources, including game day data from the Next Gen Stats (NGS) system, which uses AWS to capture real-time location, speed, and acceleration data for every player, on every play, on every inch of the field.
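
The exact NGS schema is not public, but a single tracking sample along the lines described above might look like the following Python sketch. The field names and units here are assumptions for illustration only.

```python
from dataclasses import dataclass

# Hypothetical shape of a single NGS tracking sample (field names are illustrative).
@dataclass
class TrackingSample:
    game_id: str
    play_id: int
    player_id: str
    timestamp_s: float    # time within the play, sampled roughly 10 times per second
    x_yards: float        # position along the field, 0-120 including end zones
    y_yards: float        # position across the field, 0-53.3
    speed_yps: float      # speed in yards per second
    accel_yps2: float     # acceleration in yards per second squared
```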

It also draws data from video filmed during games and practices, along with sensors around the stadium that record performance metrics from tracking devices embedded in players’ equipment. AWS captures and combines player tracking data with other variables such as weather, equipment, and play type (e.g., run, punt) for a dataset that represents a top-down picture of the player experience throughout the game.
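
As a rough illustration of what that combination step could look like, the sketch below joins per-play tracking data with game context using pandas. The file names and join keys are assumptions, not the actual pipeline.

```python
import pandas as pd

# A minimal sketch of combining tracking data with game context.
# File names and column names are hypothetical; the real schemas are not public.
tracking = pd.read_parquet("tracking_week1.parquet")    # player positions per sample
plays = pd.read_parquet("plays_week1.parquet")          # play type, down, distance, etc.
weather = pd.read_parquet("weather_week1.parquet")      # per-game conditions
equipment = pd.read_parquet("equipment_week1.parquet")  # helmet and cleat models per player

dataset = (
    tracking
    .merge(plays, on=["game_id", "play_id"], how="left")
    .merge(weather, on="game_id", how="left")
    .merge(equipment, on="player_id", how="left")
)
```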

The Digital Athlete’s AI- and ML-backed algorithms use these inputs to run millions of simulations of NFL games and specific in-game scenarios, showing teams which players are at the highest risk of injury. These insights inform the development of personalized injury prevention, training, and recovery programs.
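
The simulation and risk models themselves are not public, but the general idea can be sketched as a simple Monte Carlo loop: simulate a play many times under sampled conditions and estimate how often a player experiences a high-risk event. Every name and factor below is hypothetical.

```python
import random

# Illustrative Monte Carlo sketch, not the NFL's actual model: estimate a player's
# risk by simulating a play many times under randomly sampled conditions.
def simulate_play(player, conditions) -> bool:
    """Hypothetical: return True if this simulated play produces a high-load event."""
    load = conditions["speed"] * conditions["contact_factor"]
    return random.random() < player["baseline_risk"] * load

def estimate_risk(player, sample_conditions, n_sims=1_000_000) -> float:
    """Fraction of simulated plays that produce a high-load event for this player."""
    events = sum(simulate_play(player, sample_conditions()) for _ in range(n_sims))
    return events / n_sims
```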

Training camp for AI

The first step in harnessing AI to drive the Digital Athlete was teaching the system what to look for. Using ML techniques, the AI learns to use computer vision to “see” visual information in game footage. To help track head impacts, for example, the Digital Athlete’s AI was first trained to identify helmets by ingesting images of helmets from every angle.
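
The article does not say which detection architecture is used, but training a helmet detector from labeled frames typically looks something like the sketch below, here with an off-the-shelf Faster R-CNN model from torchvision. The dataset and training details are assumptions, not the NFL/AWS pipeline.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Start from a pretrained detector and adapt it to two classes: background and "helmet".
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)

optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)

def train_step(images, targets):
    # images: list of CHW float tensors; targets: list of dicts with "boxes" and "labels".
    model.train()
    losses = model(images, targets)   # in train mode, returns a dict of detection losses
    loss = sum(losses.values())
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss)
```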

The AI was then taught to recognize helmet impacts and cross-reference the visual information with NGS data to determine the players involved. With enough practice, the AI becomes far faster and more reliable than humans at accurately identifying and classifying helmet collisions throughout a game and a season. This gives the Digital Athlete ample data to contextualize game footage and build datasets for running simulated in-game action, and it removes the need for a manual injury tracking process that used to require days of work by club training and medical staff.
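
Cross-referencing a detected impact with NGS data can be sketched as a nearest-player lookup at the moment of impact, assuming the video detection has already been mapped into field coordinates. The distance threshold and field names below are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch: attribute a detected helmet impact to the tracked players
# closest to the impact point at the matching timestamp.
def players_involved(impact_xy, frame_positions, max_dist_yards=1.5):
    """
    impact_xy: (x, y) field coordinates of the detected impact.
    frame_positions: dict of player_id -> (x, y) positions at the same timestamp.
    Returns the IDs of players within max_dist_yards of the impact point.
    """
    impact = np.asarray(impact_xy)
    hits = []
    for player_id, xy in frame_positions.items():
        if np.linalg.norm(np.asarray(xy) - impact) <= max_dist_yards:
            hits.append(player_id)
    return hits
```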

Drawing on everything it knows so far, the Digital Athlete uses sets of player, weather, equipment, and stadium data to run virtually unlimited simulations of any play, reconstructing the conditions under which an injury occurred, with no risk to the athletes themselves.

The information gained from these simulations supports Risk Mitigation Modeling, a core component of the Digital Athlete AI that analyzes training data to determine a player’s ideal training volume while minimizing their injury risk, helping players know when they can safely push themselves.
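
As a toy illustration of the idea (not the actual Risk Mitigation Modeling approach), the sketch below assumes an sklearn-style classifier that predicts injury risk from a player’s features plus a candidate training volume, and picks the largest volume that stays within a risk budget.

```python
# Illustrative only: "risk_model", the feature layout, and the risk budget are assumptions.
def recommended_volume(risk_model, player_features, candidate_volumes, risk_budget=0.05):
    """Return the largest candidate training volume whose predicted injury risk
    stays within the risk budget; fall back to the smallest volume otherwise."""
    safe = [
        v for v in candidate_volumes
        if risk_model.predict_proba([list(player_features) + [v]])[0, 1] <= risk_budget
    ]
    return max(safe) if safe else min(candidate_volumes)
```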

Another core technology of the Digital Athlete is currently in development: 3D Pose Estimation. By assessing how a player’s movements through space and time can lead to certain injuries, pose estimation helps clubs better understand the on-field scenarios and movements that contributed to an injury.

Capturing a bird’s-eye view of the field

While it is impossible to set up Hollywood-style motion capture on a football field, AWS captures 3D player movements with a carefully calibrated, perfectly synchronized set of 38 cameras positioned in a ring around the football stadium. Each camera captures 5K video at 60 frames per second and uploads it to the AWS Cloud, where the footage is stored and analyzed.

The AI model can then view any play from 38 different angles, 60 times per second. A computer vision algorithm looks at all the video data and identifies the core and extremities of each player on the field. Because the cameras are synchronized, the Digital Athlete can determine exactly where each player is at any moment in the play.

Using that data to identify each player, the Digital Athlete then plots the positioning of each player’s body parts in three dimensions. The plotted points are compiled into virtual player skeletons that track the exact positioning of each of the player’s joints and movements over the course of every play.
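
One standard way to recover a 3D joint position from synchronized, calibrated cameras is multi-view triangulation via the direct linear transform. The sketch below shows that general technique; it is not necessarily the method used in the Digital Athlete.

```python
import numpy as np

# Multi-view triangulation (direct linear transform): recover a 3D point from its
# 2D detections in several calibrated, synchronized cameras.
def triangulate(projection_matrices, points_2d):
    """
    projection_matrices: list of 3x4 camera projection matrices (from calibration).
    points_2d: list of (u, v) pixel detections of the same joint, one per camera.
    Returns the estimated 3D point (x, y, z).
    """
    rows = []
    for P, (u, v) in zip(projection_matrices, points_2d):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)
    _, _, vt = np.linalg.svd(A)       # least-squares solution is the last right singular vector
    X = vt[-1]
    return X[:3] / X[3]               # convert from homogeneous coordinates
```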

Tracking the position of a player’s lower extremities throughout the play enables detailed analysis of movements such as a player’s gait over the course of a game, which can yield insights into whether fatigue may have contributed to an injury.
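
A simple, illustrative gait metric of this kind is stride timing derived from an ankle trajectory. The detection rule below (foot strikes as local minima in ankle height) is an assumption made for the sketch, not the NFL’s method.

```python
import numpy as np

# Illustrative sketch: estimate stride times from a player's ankle height trajectory,
# so that changes late in a game (e.g., from fatigue) can be examined.
def stride_times(ankle_height, fps=60.0):
    """Detect foot strikes as local minima in ankle height and return the time
    between consecutive strikes, in seconds."""
    h = np.asarray(ankle_height)
    minima = [i for i in range(1, len(h) - 1) if h[i] < h[i - 1] and h[i] <= h[i + 1]]
    strikes = np.array(minima) / fps
    return np.diff(strikes)
```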

Thanks to its AI and machine learning backbone, the Digital Athlete can absorb and make sense of a colossal amount of data. During each week of games, the Digital Athlete processes about 6.8 million video frames and documents around 100 million locations and positions of players on the field. And during team practices, the platform processes about 15,000 miles of player tracking data per week, equating to more than 500 million data points (10 Hz location data) per week.
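
For a sense of scale, a rough back-of-envelope conversion using only the figures above:

```python
# Back-of-envelope arithmetic on the published weekly figures (rounded inputs).
frames_per_week = 6_800_000
hours_of_video = frames_per_week / 60 / 3600          # ~31.5 camera-hours of 60 fps footage

tracking_points = 500_000_000                          # 10 Hz location samples
player_hours_tracked = tracking_points / 10 / 3600     # ~13,900 player-hours of tracking
print(f"{hours_of_video:.1f} camera-hours, {player_hours_tracked:,.0f} player-hours")
```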

As the technology continues to develop, player visualizations will become even more lifelike and the depth of analysis will increase, allowing for greater insights into the root causes of player injuries.

Writing a safety playbook for the future

For AWS and the NFL, this is only the beginning. Together, they will continue to advance the Digital Athlete even further up the field with new techniques in player modeling that will create a laboratory-quality reconstruction of every player and every play, then automatically store the data in the cloud for future analysis.

AWS and the NFL are also developing the first computer vision models that can detect and measure forces that cause concussions and other injuries. These models will help determine not just when helmet impacts occur, but the amount of force going into each impact.

Moving forward, the Digital Athlete will empower NFL teams to develop individualized training and recovery regimens and conduct real-time risk analysis for player injury during games. This data analysis will continue to drive safety initiatives such as rule changes, innovations in player safety equipment, and even more refined coaching methods.

Look for AWS and the NFL to continue innovating to revolutionize the health and safety of all athletes, both on and off the field. To learn about the NFL on AWS, visit https://aws.amazon.com/sports/nfl/.