Content Classification
Classify the content of your ads and landing pages by identifying labels, logos, and unsafe content
What is Content Classification?
AdSecure has started a comprehensive update focused on content classification. In this first phase, ad creatives are analysed and the classification results are reported in various dashboards in the AdSecure application. This new functionality is powered by a combination of Google's API and our own built-in machine learning tools.
We understand that content classification and detection may not be relevant for all of our partners, so this functionality operates on a voluntary, opt-in basis. Please let us know if you would like one or more content classification modules enabled on your account.
Modules
Unsafe Content
The first module is Unsafe Content. It uses five categories (adult, spoof, medical, violence, and racy), each of which is flagged when a given ad creative or landing page is likely to contain such content.
Analytical results can be viewed in the Unsafe Content dashboard under the Analytics menu of your AdSecure application.
In addition to the dashboard, five corresponding Unsafe Content violations have been introduced. Each is displayed in AdSecure reports when detected, just like the other violations you are used to in AdSecure. Please note that we can disable one or more of these detections on request. A sketch of how such detection might work is shown after the list below.
- Unsafe content: Adult
- Unsafe content: Spoof
- Unsafe content: Medical
- Unsafe content: Violence
- Unsafe content: Racy
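For illustration, here is a minimal sketch of how these five categories could be retrieved for a single ad image, assuming the Google API in question is Google Cloud Vision's SafeSearch detection (an assumption on our part, not a confirmed detail of AdSecure's pipeline; the file name and flagging threshold are hypothetical):

```python
from google.cloud import vision

def classify_unsafe_content(image_path: str) -> dict:
    """Return a likelihood label for each of the five Unsafe Content categories."""
    client = vision.ImageAnnotatorClient()

    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())

    # SafeSearch detection scores the image on the same five categories
    # used by the Unsafe Content module: adult, spoof, medical, violence, racy.
    annotation = client.safe_search_detection(image=image).safe_search_annotation

    return {
        category: vision.Likelihood(getattr(annotation, category)).name
        for category in ("adult", "spoof", "medical", "violence", "racy")
    }

# Hypothetical usage: flag the creative when any category is LIKELY or worse.
if __name__ == "__main__":
    results = classify_unsafe_content("ad_creative.png")  # hypothetical file name
    flagged = [c for c, level in results.items()
               if level in ("LIKELY", "VERY_LIKELY")]
    print(results, flagged)
```

Each category comes back as a likelihood level (from VERY_UNLIKELY to VERY_LIKELY) rather than a simple yes/no, which is why a threshold is needed before raising a violation.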
Ad Labels
The second module is Ad Labels, which reports information about entities in an ad image across a broad group of categories. Labels can identify general objects, locations, activities, animal species, products, and more.
Ad Labels has its own dedicated dashboard under the Analytics menu and can give you great insight into exactly what appears in your ad images, which is particularly useful if you have concerns about certain products, industries, or other content. A sketch of how label extraction might work is shown below.
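As with Unsafe Content, here is a minimal sketch of how labels could be extracted from an ad image, assuming label detection via the Google Cloud Vision API (again our assumption; the file name and score cutoff are hypothetical):

```python
from google.cloud import vision

def label_ad_image(image_path: str, min_score: float = 0.7) -> list[tuple[str, float]]:
    """Return (label, confidence) pairs for entities detected in an ad image."""
    client = vision.ImageAnnotatorClient()

    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())

    # Label detection identifies general objects, locations, activities,
    # animal species, products, and more, each with a confidence score.
    response = client.label_detection(image=image)
    return [(label.description, label.score)
            for label in response.label_annotations
            if label.score >= min_score]

# Hypothetical usage:
if __name__ == "__main__":
    for description, score in label_ad_image("ad_creative.png"):
        print(f"{description}: {score:.2f}")
```

Filtering by a minimum confidence score keeps low-certainty labels out of the results; the 0.7 cutoff here is illustrative, not a value used by AdSecure.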