The College Football Playoff delivered plenty of excitement during its quarterfinal matchups from December 31 to January 2. The most drama unfolded in the double-overtime Peach Bowl between Texas and Arizona State. Both teams squandered late opportunities, including two missed field goals by the Longhorns and a stalled drive by the Sun Devils in the final minutes of the fourth quarter. That late push by Arizona State sparked controversy when the drive ended after Texas DB Michael Taaffe delivered a big hit on an Arizona State wide receiver on third-and-15. The play was reviewed for targeting, but the on-field no-call was upheld. The Sun Devils were forced to punt, and the debate continued to rage across the internet.
This no-call proved pivotal, as it turned what would have been a first-and-10 around the Texas 35-yard line into a fourth-and-5 near midfield. The game went into overtime, with Texas ultimately emerging victorious. The targeting rule has long been a source of controversy, often hinging on subjective judgment. This raises the question of whether AI could be used to bring consistency and standardization to such calls.
Targeting (Rule 9-1-4) is defined by the NCAA rule book as when “a player takes aim at an opponent for purposes of attacking with forcible contact that goes beyond making a legal tackle or a legal block or playing the ball.” The rule was initially introduced in 2008 as a 15-yard penalty and was amended in 2013 to include ejection from the game. If the penalty is assessed in the second half, the ejection carries over to the first half of the next game.
Because of the weight a targeting call carries in the outcome of a game and the subjectivity involved in assessing the penalty, targeting has become one of the most controversial rules in college football. Supporters of the rule point to its importance in maintaining player safety and preventing concussions and other serious head and neck injuries. Opponents often cite overly cautious enforcement that does not take into account the full context of the play. The NCAA rule book does provide some indicators of targeting, including launching at an opponent, a crouch followed by an upward and forward thrust, leading with the helmet, shoulder, forearm, fist, hand, or elbow to attack the head or neck area, and lowering the head before initiating contact with the crown of the helmet.
The rule book also provides several examples of defenseless players, but there is still plenty of room for interpretation. While the rulebook aims to create clarity, the variability in enforcement is evident from game to game, as officials may have differing opinions on the level of impact or the intent of the tackler. This subjectivity can lead to inconsistencies in how the rule is applied, potentially undermining the fairness of the game.
Following the no-call decision during the Peach Bowl, the Big 12 Commissioner, Brett Yormark, called for greater consistency and normalization in the way that targeting is called in college football. Other sports and professional leagues have turned to AI as a solution to improve consistency in important calls.
In 2018, FIFA introduced the VAR (Video Assistant Referee) into World Cup play. A year later, soccer’s most popular league, the English Premier League, adopted the technology as well. Both FIFA and the Premier League insist that VAR is a decision support tool rather than a decision-making tool. This means VAR provides additional evidence to referees during critical moments, such as determining “factual offside, accidental handball by the goalscorer, if a foul was committed inside or outside the penalty area, the ball out of play, goalkeeper movement or encroachment of the penalty area at a spot-kick, or mistaken identity.” The intention of VAR is to reduce the number of incorrect calls in soccer matches. However, the system has sparked controversy. Critics argue that it slows down the game and adds unnecessary complexity, and in the most egregious cases, VAR decisions have altered the outcome of matches.
Major League Baseball has also taken steps toward implementing AI in officiating via an automated strike zone. Last season, an automated ball-strike (ABS) system debuted at the Triple-A level. MLB experimented with two different implementations: fully automated ABS and a challenge system. In the fully automated system, all ball and strike calls are made by Hawk-Eye technology. In the challenge system, human umpires continue to call balls and strikes as usual, but each team is allowed to challenge a certain number of calls per game, with reviews conducted by the ABS system. So far, both players and fans prefer the challenge system to both the fully automated ABS and traditional umpire-only calls. Commissioner Rob Manfred told ESPN that the greatest barrier to implementation in the major leagues is the difficulty of setting a personalized strike zone for each player.
Both VAR and ABS represent important steps towards increased AI integration in sports, but they also highlight some of the major challenges to broad implementation at the highest levels of competition.
AI systems, powered by deep learning algorithms and computer vision, have gained considerable attention in recent years. With sufficient computational and data resources, these algorithms can be trained to detect patterns in player behavior, tracking key factors like head and body positioning, the force of impact, and the timing of hits. AI’s potential lies not only in its ability to make decisions quickly but also in its capacity to do so with greater consistency. Human referees, even with their expertise and access to slow-motion replays, often enforce rules inconsistently due to the intensity of the game, subjectivity, or human error. In contrast, AI can analyze thousands of video frames, examining each one for signs of targeting infractions, without fatigue or bias. This could ensure more consistent calls, reducing the controversy often seen in high-stakes games.
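To make the idea concrete, here is a minimal, purely illustrative sketch of how such a system might be framed as a supervised learning problem: a small PyTorch model that takes a short stack of video frames and produces a single targeting probability. The architecture, input shapes, and class names are my assumptions for illustration, not a description of any system in use.

```python
# Illustrative sketch only: a toy clip-level "targeting" classifier in PyTorch.
# A real system would rely on pose estimation, impact-force proxies, and far
# more sophisticated models trained on large volumes of labeled game footage.
import torch
import torch.nn as nn

class TargetingClassifier(nn.Module):
    """Scores a short clip (a stack of frames) for the likelihood of targeting."""
    def __init__(self):
        super().__init__()
        # 3D convolutions look across frames, so motion cues such as a launch
        # or a lowered head can, in principle, be learned from the data.
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.head = nn.Linear(32, 1)  # single logit: targeting vs. no targeting

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip shape: (batch, channels=3, frames, height, width)
        x = self.features(clip).flatten(1)
        return torch.sigmoid(self.head(x))

if __name__ == "__main__":
    model = TargetingClassifier()
    dummy_clip = torch.randn(1, 3, 16, 112, 112)  # random stand-in for real footage
    print(f"Targeting probability (untrained model): {model(dummy_clip).item():.2f}")
```

The point of the sketch is the framing, not the architecture: such a model only becomes useful once it has been trained on a large set of clips that humans have already judged, which is exactly the bottleneck discussed next.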
Despite significant advancements in AI, the technology remains too nascent to make an immediate debut in top-tier sports. Training algorithms requires vast amounts of high-quality labeled video data. That means that, even if a database of every televised college football hit were available, each clip would still need to be reviewed by a human to determine whether it constitutes targeting. That determination is the label, and training an AI model requires both the footage and the labels.
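As a rough illustration of what "data plus labels" means in practice, the snippet below pairs hypothetical clip files with a human reviewer's verdict. The file names are invented; the structure is the point.

```python
# Hypothetical example: supervised training needs every clip paired with a
# human-assigned label. The file paths below are invented for illustration.
from dataclasses import dataclass

@dataclass
class LabeledClip:
    video_path: str     # path to a broadcast clip of a hit
    is_targeting: bool  # verdict assigned by a human reviewer

dataset = [
    LabeledClip("clips/hit_001.mp4", is_targeting=False),
    LabeledClip("clips/hit_002.mp4", is_targeting=True),
]

# A model learns from (clip, label) pairs; the footage alone, without the
# human verdicts, cannot teach it what targeting looks like.
for clip in dataset:
    print(clip.video_path, "->", "targeting" if clip.is_targeting else "no targeting")
```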
As an anecdotal experiment, I prompted OpenAI’s Sora to generate a video of targeting in college football. The result was a mess of bungled anatomy, physics-defying motion, and incoherent player interactions that failed to resemble any realistic football play. While generating a video of targeting is admittedly a more challenging task for AI than determining whether a real-world video contains targeting, the experiment highlights the technical difficulties AI faces in accurately replicating the complex dynamics of live-action sports.
Moreover, AI systems must be capable of processing video footage in real time to identify potential infractions as they occur, all while managing the computational load. This requires high-performance hardware and low-latency video processing, which can be both costly and technically challenging to implement at scale during live events. Cost is already a barrier to the global adoption of VAR, and similar revenue divides exist between college football conferences.
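Some back-of-the-envelope arithmetic shows how tight that budget is. The frame rate and inference time below are assumptions, not measurements from any deployed system.

```python
# Rough latency-budget arithmetic; all numbers are assumptions for illustration.
broadcast_fps = 60                       # assumed broadcast frame rate
frame_budget_ms = 1000 / broadcast_fps   # time available per frame (~16.7 ms)

assumed_inference_ms = 45                # hypothetical per-frame model inference time
frames_processed_per_second = 1000 / assumed_inference_ms

lag_frames_per_second = max(0.0, broadcast_fps - frames_processed_per_second)
print(f"Per-frame budget: {frame_budget_ms:.1f} ms")
print(f"Frames of lag accumulated per second: {lag_frames_per_second:.0f}")
```

At those assumed numbers, the system falls roughly 38 frames behind every second of live play, which is why dedicated hardware or distributed processing quickly becomes unavoidable.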
Perhaps most importantly, it remains unclear whether fans, players, and university administrators would be willing to accept AI as a decision maker in the sport. There is historical precedent for rejecting computer-based analysis in favor of human judgment: look no further than the abandonment of the computer rankings of the BCS era in favor of the all-human selection committee of the College Football Playoff.
AI could provide the ultimate level of standardization in how penalties are called across games, but its implementation remains a distant goal. Should the college football community choose to embrace greater automation, it will require significant investment and collaboration—not only from universities and conferences, but also from AI vendors and television broadcast networks. Investments will need to address both the technical and cultural barriers to entry but could result in a less controversial game where referee subjectivity is minimized.