cess on object detection in the computer vision field. The general principle of AdaBoost[1] is to linearly combine a series of weak classifiers to produce a superior classifier. Each weak classifier consists of a prediction and a confidence value, and each sample in the training set has an associated weight. At each iteration, AdaBoost chooses the best weak classifier to minimize the upper bound of the training error, increases the weights of wrongly classified training samples, and decreases the weights of correctly classified samples. Benefiting from this scheme, many AdaBoost-based object detection algorithms for face[2−5] and pedestrian[6−9] have been propos

work to interpret all the asymmetric methods as AdaBoost, clarify their relationships, and further derive superior real-valued cost-sensitive boosting algorithms which adopt confidence-rated weak learners to reduce the upper bound of the training error.

In this paper, we give a detailed discussion of the various discrete asymmetric extensions, divide them into three groups according to the different upper bounds of the asymmetric training error, and clarify their relations to the loss minimization of AdaBoost with some reformulations and improvements. Then, the real-valued asymmet-
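The reweighting scheme described above can be sketched as a minimal discrete AdaBoost trainer with one-dimensional threshold stumps. This is an illustrative sketch, not the paper's implementation; all function names (`train_adaboost`, `predict`) and the stump weak learner are assumptions made here for clarity.

```python
import numpy as np

def train_adaboost(X, y, n_rounds=10):
    """Minimal discrete AdaBoost on 1-D features with threshold stumps.

    X: (n,) feature values; y: (n,) labels in {-1, +1}.
    Returns a list of (threshold, polarity, alpha) weak classifiers.
    """
    n = len(X)
    w = np.full(n, 1.0 / n)                    # start from uniform sample weights
    ensemble = []
    for _ in range(n_rounds):
        # choose the stump (threshold, polarity) with the lowest weighted error
        best = None
        for thr in X:
            for pol in (+1, -1):
                pred = np.where(X >= thr, pol, -pol)
                err = w[pred != y].sum()
                if best is None or err < best[0]:
                    best = (err, thr, pol, pred)
        err, thr, pol, pred = best
        err = min(max(err, 1e-10), 1 - 1e-10)  # numerical guard against err = 0 or 1
        alpha = 0.5 * np.log((1 - err) / err)  # confidence of this weak classifier
        # increase weights of misclassified samples, decrease the correct ones
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()                           # renormalize to a distribution
        ensemble.append((thr, pol, alpha))
    return ensemble

def predict(ensemble, X):
    """Sign of the alpha-weighted (linear) combination of the weak classifiers."""
    score = sum(a * np.where(X >= thr, pol, -pol) for thr, pol, a in ensemble)
    return np.sign(score)
```

The weight update `w *= exp(-alpha * y * pred)` is exactly the rule stated above: a misclassified sample has `y * pred = -1`, so its weight grows by `exp(alpha)`, while a correctly classified sample shrinks by `exp(-alpha)`.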