Abstract

Protecting image manipulation detectors against perfect-knowledge attacks requires detector architectures that are intrinsically difficult to attack. In this paper, we pursue this goal by exploiting a recently proposed multiple-classifier architecture that combines the improved security of 1-Class (1C) classification with the good performance ensured by conventional 2-Class (2C) classification in the absence of attacks. The architecture, known as the 1.5-Class (1.5C) classifier, consists of one 2C classifier and two 1C classifiers run in parallel, followed by a final 1C classifier. In our system, the first three classifiers are implemented as Support Vector Machines (SVMs) fed with SPAM features; their outputs are then processed by a final 1C SVM in charge of making the final decision. Particular care is taken to design a proper strategy to train the SVMs the 1.5C classifier relies on, a crucial task given the difficulty of training the two 1C classifiers at the front end of the system. We assessed the performance of the proposed solution on three manipulation detection tasks, namely image resizing, contrast enhancement, and median filtering. Our results confirm the security improvement allowed by the 1.5C architecture with respect to a conventional 2C solution, with a negligible performance loss in the absence of attacks.
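
To make the architecture concrete, below is a minimal sketch in Python with scikit-learn; it is not the authors' implementation. The random feature matrices, the 686-dimensional SPAM size, the RBF/nu hyperparameters, and the choice to train the final 1C SVM on pristine-sample scores only are illustrative assumptions; designing the actual training strategy, especially for the two front-end 1C classifiers, is a central point of the paper.

```python
import numpy as np
from sklearn.svm import SVC, OneClassSVM

# Placeholder stand-ins for SPAM feature matrices (shape: n_samples x 686,
# the usual dimensionality of second-order SPAM features). In a real
# pipeline these would come from a SPAM extractor, not from random draws.
rng = np.random.default_rng(0)
X_pristine = rng.normal(0.0, 1.0, size=(200, 686))
X_manipulated = rng.normal(0.5, 1.0, size=(200, 686))

# Front end: one 2C SVM trained on both classes, plus one 1C SVM per class,
# each trained only on samples of its own class.
X = np.vstack([X_pristine, X_manipulated])
y = np.hstack([np.zeros(len(X_pristine)), np.ones(len(X_manipulated))])
svm_2c = SVC(kernel="rbf", gamma="scale").fit(X, y)
svm_1c_pristine = OneClassSVM(kernel="rbf", nu=0.1).fit(X_pristine)
svm_1c_manipulated = OneClassSVM(kernel="rbf", nu=0.1).fit(X_manipulated)

def front_end_scores(X):
    """Stack the three front-end decision scores into one 3-D vector."""
    return np.column_stack([
        svm_2c.decision_function(X),
        svm_1c_pristine.decision_function(X),
        svm_1c_manipulated.decision_function(X),
    ])

# Back end: a final 1C SVM fused over the score vectors. Training it on
# pristine-sample scores only is an assumption made here for simplicity.
svm_final = OneClassSVM(kernel="rbf", nu=0.1).fit(front_end_scores(X_pristine))

# predict() returns +1 for inliers (pristine), -1 for outliers (manipulated).
pred = svm_final.predict(front_end_scores(X_manipulated))
print("fraction flagged as manipulated:", np.mean(pred == -1))
```

The design point the sketch mirrors is that the back-end classifier sees only the three decision scores, so an adversary must simultaneously cross the 2C boundary and escape both 1C acceptance regions, which is what makes the 1.5C arrangement harder to attack than a lone 2C detector.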
