eCommons

An Empirical Comparison of Supervised Learning Algorithms Using Different Performance Metrics

Abstract

We present the results of a large-scale empirical comparison between seven learning methods: SVMs, neural nets, decision trees, memory-based learning, bagged trees, boosted trees, and boosted stumps. A novel aspect of our study is that we compare these methods on nine different performance criteria: accuracy, squared error, cross entropy, ROC Area, F-score, precision/recall break-even point, average precision, lift, and probability calibration. The models with the best performance overall are neural nets, SVMs, and bagged trees. However, if we apply Platt calibration to boosted trees, they become the best model overall. Detailed examination of the results shows that even the best models perform poorly on some problems or metrics, and that even the worst models sometimes yield the best performance.
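The abstract notes that Platt calibration turns boosted trees into the best overall model. As a rough illustration of the technique (not the authors' implementation), Platt scaling fits a sigmoid `p = 1 / (1 + exp(-(A*s + B)))` mapping a model's raw scores `s` to calibrated probabilities by minimizing cross-entropy on held-out data. The sketch below, with hypothetical scores and labels, fits `A` and `B` by plain gradient descent; Platt's original method also smooths the 0/1 targets, which is omitted here for brevity.

```python
import numpy as np

def platt_calibrate(scores, labels, lr=0.1, n_iter=2000):
    """Fit sigmoid(A*s + B) mapping raw scores to probabilities
    by minimizing cross-entropy via gradient descent.
    Note: Platt's original method replaces the 0/1 labels with
    smoothed targets (N+ + 1)/(N+ + 2) and 1/(N- + 2); this
    sketch uses the raw labels for simplicity."""
    s = np.asarray(scores, dtype=float)
    y = np.asarray(labels, dtype=float)
    A, B = 1.0, 0.0
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(A * s + B)))
        grad = p - y                  # d(cross-entropy)/dz per example
        A -= lr * np.mean(grad * s)   # chain rule: dz/dA = s
        B -= lr * np.mean(grad)       # dz/dB = 1
    return A, B

# Hypothetical held-out scores from an uncalibrated model
scores = np.array([-2.0, -1.0, -0.5, 0.5, 1.0, 2.0])
labels = np.array([0, 0, 0, 1, 1, 1])
A, B = platt_calibrate(scores, labels)
probs = 1.0 / (1.0 + np.exp(-(A * scores + B)))
```

Because the sigmoid is monotone, calibration preserves the model's ranking (so ROC area is unchanged) while improving probability-based metrics such as squared error and cross entropy.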

Date Issued

2005-01-24

Publisher

Cornell University

Keywords

computer science; technical report

Previously Published As

http://techreports.library.cornell.edu:8081/Dienst/UI/1.0/Display/cul.cis/TR2005-1973

Types

technical report
