Finding the Best Fighters

Fighter ratings—who belongs in the top ten, and in what order?—are a perennial topic for fans of Mixed Martial Arts (MMA). When considering a fighter’s accomplishments, we usually begin with his win/loss ratio, which—while a good first estimate—fails to take into account the quality of opposition he has overcome. A fighter with an unblemished record of wrecking scrubs in regional promotions does not deserve a higher rating than one who has a winning but mixed record against world-class opponents.

Once we seek to rate a fighter relative to his opponents, we run into the problem that in order to rate one fighter we must first rate his opponents, but to rate his opponents we must rate their opponents, one of whom is the fighter we started with in the first place, placing us in a kind of infinite loop. Some speculators try to get around this using a form of fight calculus along the lines of x beat y, y beat z, so x can beat z. However, serious fans know that the talents and specialities of a given fighter serve like a player’s choice in rochambeau: that paper beats rock and rock beats scissors does not mean that paper beats scissors. Ultimately, aesthetic appreciation starts to creep into the argument, the grappler preferring the fellow with the finest jiu-jitsu, the kickboxer giving a higher rating to the best striker, and so on.

In thinking about this problem, it occurred to me that it should be possible to apply some mathematics in order to generate a fairly objective set of ratings. Specifically, one could—if he had a complete dataset for fights and fighters—build a graph linking each fighter to every fighter he has beaten, then work backward from the edges, tallying victories along the way.
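The article doesn’t spell out the scoring method, so here is a minimal sketch of the graph-building idea in Python (the original tooling was Clojure). The fighter names, the 0.5 discount factor, and the depth limit used to break the circularity are all illustrative assumptions, not the algorithm actually used:

```python
from collections import defaultdict

# Hypothetical fight records as (winner, loser) pairs.
fights = [
    ("fedor", "nogueira"),
    ("nogueira", "coleman"),
    ("coleman", "sakuraba"),
    ("silva", "sakuraba"),
    ("silva", "sakuraba"),
]

# The graph: each fighter points at every fighter he has beaten.
beat = defaultdict(list)
for winner, loser in fights:
    beat[winner].append(loser)

def rating(fighter, depth=3):
    """Tally victories through the graph: each win is worth one point
    plus a discounted share of the beaten fighter's own rating. The
    depth cutoff is one crude way out of the rate-the-opponents-first
    loop; the actual algorithm handled this differently."""
    if depth == 0:
        return 0.0
    return sum(1.0 + 0.5 * rating(loser, depth - 1)
               for loser in beat[fighter])

ranked = sorted(beat, key=rating, reverse=True)
```

Note how the discounted recursion rewards beating fighters who themselves beat good fighters, which is exactly the quality-of-opposition signal a raw win/loss ratio misses.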

Acquiring complete data for all MMA matches in all organizations over the last 15+ years appeared quite difficult, so I chose to test my algorithm on a subset of the total data. I started by harvesting a complete record of the fights in PRIDE FC from public sources, then calculated fighter rankings within that organization without considering the record of any fighter in other promotions.

The resulting list of the top ten fighters in PRIDE FC: Fedor Emelianenko, Wanderlei Silva, Mirko “Cro Cop” Filipović, Antônio Rodrigo “Big Nog” Nogueira, Antonio Rogério “Lil’ Nog” Nogueira, Ricardo Arona, Mauricio Rua, Mark Hunt, Dan Henderson, and Mark Coleman.


The scoring algorithm produced a few clusters that tell us something about the relative records of these fighters. Fedor Emelianenko is an outlier, significantly outperforming the rest. Wanderlei Silva has a solid hold on second place, largely because of sadistic matchmaking (his three maulings of Kazushi Sakuraba do much for his rating). Mirko “Cro Cop” Filipović and Antônio Rodrigo Nogueira (from here on, “Big Nog”) are essentially tied for third, while Antônio Rogério Nogueira (“Little Nog”) scores halfway between them and the cluster around Arona and Rua. The bottom three are more or less tied for last.

The eleventh-ranked fighter was Takanori Gomi—far and away the best fighter with the best record in the lighter weight brackets (all apologies to my personal favorite, Kid Yamamoto)—who narrowly missed the top ten because he never faced any of the heavyweights who made the list.

List in hand, I grew curious as to what happened when the top fighters fought each other. Working my way through the database by hand to answer such questions was onerous, so I produced a visualization rendering all of the details in one graph.

The circle by each fighter’s name is scaled to his relative rating. Arrows point from victor to vanquished, each annotated with the manner in which the fight ended.
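The original renderings came from custom Clojure code; as a rough sketch of how one might reproduce the style, the following emits Graphviz DOT with circle size taken from the rating and each victor-to-vanquished arrow labeled by the finish. The names, ratings, and the `to_dot` helper are all hypothetical, not the author’s code:

```python
def to_dot(ratings, fights):
    """Build a Graphviz DOT digraph: node width scales with rating,
    edges run from victor to vanquished, labeled by the finish."""
    lines = ["digraph fights {"]
    for fighter, r in ratings.items():
        # Circle width in inches, proportional to the rating.
        lines.append(f'  "{fighter}" [shape=circle, width={0.5 + r:.2f}];')
    for winner, loser, method in fights:
        lines.append(f'  "{winner}" -> "{loser}" [label="{method}"];')
    lines.append("}")
    return "\n".join(lines)

# Placeholder data; render with `dot -Tpng` if you have Graphviz installed.
dot = to_dot({"Fedor": 2.0, "Big Nog": 1.4},
             [("Fedor", "Big Nog", "decision")])
```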

This visualization helps to explain some of the aforementioned rating clusters: Fedor beat everyone put in front of him; Big Nog beat everyone but Fedor; Dan Henderson, despite his considerable skill and superb overall record, lost all his fights against top-rated competition.

These results were interesting enough that I acquired complete fight records for the UFC and generated both a top ten list and a graph for that organization: Georges St. Pierre, Chuck Liddell, Randy Couture, Anderson Silva, Rich Franklin, Tito Ortiz, Matt Hughes / Tim Sylvia (tied), and Lyoto Machida / Yushin Okami (tied).

Georges St. Pierre (“GSP”) is as much of an outlier in the UFC as Fedor was in PRIDE, and GSP’s two career losses do nothing to diminish that. Chuck Liddell is firmly in second place, while Randy Couture and Anderson Silva are nearly tied for third. Tim Sylvia—not a particularly well-admired fighter—is tied with Matt Hughes, a welterweight legend. Machida and Okami are tied for ninth place.

The UFC is a larger promotion with more weight classes, so I feel it necessary to give honorable mention to the runners-up (in order): Diego Sanchez, B.J. Penn, Jon Fitch, Keith Jardine, Rashad Evans, Frankie Edgar, Thiago Alves, and Dan Henderson.

The circle by each fighter’s name is scaled to his relative rating. Arrows point from victor to vanquished, each annotated with the manner in which the fight ended.

This graph is confounded by the UFC’s greater number of weight classes, which segment the fighters into unconnected sub-graphs. I would very much like to produce a set of statistics for the UFC divided by weight class, but the data is not currently annotated with that information, and manually entering the weights for the nearly 1200 fights in the database is unappealing.
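Those islands are easy to find mechanically: treat each fight as an undirected edge and collect the connected components. A small sketch, in which the fighter names are placeholders and `components` is a hypothetical helper rather than part of the original tooling:

```python
from collections import defaultdict

def components(fights):
    """Group fighters into connected sub-graphs (the weight-class
    islands), using a depth-first traversal over an undirected
    adjacency map built from (fighter_a, fighter_b) pairs."""
    adj = defaultdict(set)
    for a, b in fights:
        adj[a].add(b)
        adj[b].add(a)
    seen, groups = set(), []
    for start in adj:
        if start in seen:
            continue
        stack, group = [start], set()
        while stack:
            f = stack.pop()
            if f in group:
                continue
            group.add(f)
            stack.extend(adj[f] - group)
        seen |= group
        groups.append(group)
    return groups

# Two fights in different divisions yield two unconnected sub-graphs.
groups = components([("gsp", "hughes"), ("liddell", "couture")])
```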

In the meantime: GSP and Hughes stand out at welterweight, several of the greatest fighters in the history of the promotion fight at light heavyweight, and Randy Couture is the only man to have beaten top ten fighters in the light heavyweight and heavyweight divisions.


No method is perfect, and no matter how one arrives at these sorts of ratings it’s important to keep in mind that anything can happen in a fight, that any fighter can win or lose. Also, ratings are by necessity retrospective rather than prospective. We know who beat whom in the past, who was the better man on a given night, but we do not know what has changed since then—age, injuries, lapses in training—that might bring us a different fighter who happens to carry the same name and face.

Acknowledgements and Further Reading

The data used to produce these findings was scraped from Sherdog. The visualizations were produced with some custom Clojure code.

The algorithm went through several revisions. Among the rejected candidates were variations on the Elo rating system for players of Go and chess, the Kendall-Wei algorithm (an eigenvector-based method of which Google’s PageRank is a special case), Bayesian analysis, and a few Hamiltonian cycle-based approaches. The statistical methods were ill-suited to this purpose because there were so few samples. A computationally expensive graph traversal related to the Hamiltonian ranking was the runner-up, but it produced a number of upsets (including rating Sokoudjou as PRIDE’s ninth best fighter). The final algorithm has much in common with the algorithm recommended in “Ranking from unbalanced paired-comparison data” by H. A. David, though—sadly—I completed the work before finding that paper.
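For the curious, the flavor of the rejected Kendall-Wei approach can be sketched as power iteration toward the principal eigenvector of the win matrix: each fighter’s score is proportional to the sum of the scores of the fighters he has beaten. The constant added each round is my own addition to keep the iteration well-behaved on a sparse, cycle-free toy example; the data is made up:

```python
def kendall_wei(wins, iterations=100):
    """wins[a][b] = number of times a beat b. Repeatedly multiply the
    score vector by the win matrix (plus a stabilizing constant) and
    renormalize, converging toward an eigenvector-style ranking."""
    score = {f: 1.0 for f in wins}
    for _ in range(iterations):
        new = {f: 1.0 + sum(n * score[b] for b, n in wins[f].items())
               for f in wins}
        total = sum(new.values())
        score = {f: v / total for f, v in new.items()}
    return score

# Toy data: "a" beat "b" and "c"; "b" beat "c"; "c" beat no one.
wins = {
    "a": {"b": 1, "c": 1},
    "b": {"c": 1},
    "c": {},
}
scores = kendall_wei(wins)
```

As expected, beating fighters who themselves have wins is worth more than an equal number of wins over winless opponents.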

In case you’d like to experiment with them, I’ve made the data publicly available via Google Spreadsheets: PRIDE FC, UFC.


This entry is part of Jack Rusher’s archive, originally published August 24th, 2009, in New York.