
dc.contributor.author    Fatehpuria, Naman
dc.date.accessioned    2015-06-26T04:30:29Z
dc.date.available    2015-06-26T04:30:29Z
dc.date.issued    2014-12
dc.identifier.other    fatehpuria_naman_201412_ms
dc.identifier.uri    http://purl.galileo.usg.edu/uga_etd/fatehpuria_naman_201412_ms
dc.identifier.uri    http://hdl.handle.net/10724/31411
dc.description.abstract    We present an algorithm for Support Vector Machines that can be parallelized effectively. The algorithm scales well to very large datasets of millions of training points. Instead of optimizing the whole training set in a single Support Vector Machine, the data is split into subsets and each subset is optimized independently on a different Support Vector Machine. The results from the individual Support Vector Machines are then combined to obtain the trained Support Vector Machine. The high performance is due to the low communication overhead between the different Support Vector Machines. In this thesis, the runtime performance of the algorithm is tested on a dataset of more than 8 million instances, with a speedup of about 20-fold.
dc.language    eng
dc.publisher    uga
dc.rights    On Campus Only Until 2016-12-01
dc.subject    Parallel support vector machine, Sequential minimal optimization
dc.title    Parallel Support Vector Machines using SMO
dc.type    Thesis
dc.description.degree    MS
dc.description.department    Computer Science
dc.description.major    Computer Science
dc.description.advisor    John A. Miller
dc.description.committee    John A. Miller
dc.description.committee    Lakshmish Ramaswamy
dc.description.committee    Krzysztof J. Kochut
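
The abstract above describes a split-train-combine scheme: partition the training data, train an independent SVM (solved with SMO) on each partition, and merge the results into a single trained SVM. The sketch below is a minimal illustration of that idea, not the thesis implementation; it assumes scikit-learn's SMO-based SVC solver, joblib for the parallel subset training, and a combination step that retrains one final SVM on the pooled support vectors. The partition count, kernel, and combination strategy are illustrative assumptions.

    # Minimal sketch of a split-train-combine parallel SVM (assumptions noted above).
    import numpy as np
    from joblib import Parallel, delayed
    from sklearn.svm import SVC
    from sklearn.datasets import make_classification

    def train_subset(X_sub, y_sub):
        """Train an independent SVM on one data partition."""
        clf = SVC(kernel="rbf", C=1.0)
        clf.fit(X_sub, y_sub)
        # Keep only the support vectors and their labels; the rest of the
        # partition is discarded, which keeps the combination step small.
        return clf.support_vectors_, y_sub[clf.support_]

    def parallel_svm(X, y, n_parts=8, n_jobs=-1):
        """Split the data, train one SVM per partition in parallel, then
        combine by retraining a single SVM on the pooled support vectors
        (one common combination strategy; the thesis may combine differently)."""
        parts = np.array_split(np.random.permutation(len(X)), n_parts)
        results = Parallel(n_jobs=n_jobs)(
            delayed(train_subset)(X[idx], y[idx]) for idx in parts
        )
        X_sv = np.vstack([sv for sv, _ in results])
        y_sv = np.concatenate([lab for _, lab in results])
        final = SVC(kernel="rbf", C=1.0)
        final.fit(X_sv, y_sv)
        return final

    if __name__ == "__main__":
        X, y = make_classification(n_samples=20_000, n_features=20, random_state=0)
        model = parallel_svm(X, y, n_parts=8)
        print("support vectors in combined model:", model.n_support_.sum())

Because each subset SVM sees only its own partition, the workers need no communication until the final combination step, which is the source of the low overhead the abstract refers to.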

