LAION-SAFETY

An open toolbox for NSFW and toxicity detection

Overview

We present an ensemble for classifying NSFW image-text pairs, consisting of an image classifier based on EfficientNet V2 (B2, 260x260, https://github.com/google/automl/tree/master/efficientnetv2) in combination with Detoxify (https://github.com/unitaryai/detoxify), an existing language model for toxicity detection.
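
For illustration, here is a minimal sketch of scoring an alt text with Detoxify. It assumes the detoxify package is installed (pip install detoxify) and uses the "unbiased" checkpoint, one of the released Detoxify models that exposes a "sexual_explicit" output; whether this is the exact checkpoint used in the ensemble is an assumption.

```python
# Hedged sketch: score a caption/alt text with Detoxify.
# Assumes `pip install detoxify`; the "unbiased" checkpoint is one of
# the released Detoxify models and includes a "sexual_explicit" label.
from detoxify import Detoxify

model = Detoxify("unbiased")
scores = model.predict("example alt text from an image-text pair")
# `scores` is a dict of label -> float in [0, 1], e.g. "toxicity",
# "obscene", "sexual_explicit".
print(scores["sexual_explicit"])
```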

The image classifier was trained on 682550 images from the five classes "drawing" (39026), "hentai" (28134), "neutral" (369507), "porn" (207969) and "sexy" (37914).

To analyze the performance of the image classifier with and without the additional information from Detoxify, we assembled a manually inspected test set of 4900 samples containing images and their alt texts.

To use our 5-class image classifier as a binary SFW/NSFW classifier, we treat images from the classes "drawing" and "neutral" as SFW and images from "hentai", "porn" and "sexy" as NSFW.
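
A minimal sketch of this binary mapping, assuming the classifier's softmax output is available as a dict keyed by class name (the function and variable names are hypothetical):

```python
# Collapse the 5-class softmax output into a binary SFW/NSFW label,
# following the class mapping described above.
SFW_CLASSES = {"drawing", "neutral"}

def to_binary_label(probs):
    top_class = max(probs, key=probs.get)  # argmax over class names
    return "sfw" if top_class in SFW_CLASSES else "nsfw"

# Example: a confident "porn" prediction maps to NSFW.
example = {"drawing": 0.02, "hentai": 0.03, "neutral": 0.05,
           "porn": 0.85, "sexy": 0.05}
assert to_binary_label(example) == "nsfw"
```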

--> Our image classifier correctly predicts 96.45% of true NSFW images as NSFW and wrongly discards 7.96% of SFW images as NSFW.

False negatives: 3.55%

False positives: 7.96%

We compare our model with the NSFW classifier from GitHub user GantMan (https://github.com/GantMan/nsfw_model, Inception V3, Keras, 299x299), which is, to the best of our knowledge, the best public NSFW classifier at the time of writing:

False negatives: 5.90%

False positives: 7.52%

--> Our image classifier predicts at least 2% fewer false negatives at the cost of classifying about 0.5% more SFW images as NSFW. Since reducing the false negative rate is generally the most important goal, a slightly increased false positive rate should be acceptable.

To build an ensemble, we combine Detoxify's "sexual_explicit" score for the alt text with the softmax scores of the image classifier before determining the class with the highest score.
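
A hedged sketch of this combination step follows. The text only states that the "sexual_explicit" score is combined with the image classifier's softmax scores before the argmax; adding it to the NSFW classes with weight 1 is an assumption, not the authors' exact formula:

```python
NSFW_CLASSES = {"hentai", "porn", "sexy"}

def ensemble_label(image_probs, sexual_explicit_score):
    # Fold the Detoxify "sexual_explicit" text score into the image
    # classifier's class scores (assumed uniform weighting), then argmax.
    combined = dict(image_probs)
    for cls in NSFW_CLASSES:
        combined[cls] += sexual_explicit_score
    top_class = max(combined, key=combined.get)
    return "nsfw" if top_class in NSFW_CLASSES else "sfw"
```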

This ensemble achieves the following rates:

False negatives: 2.22%

False positives: 5.33%

--> This ensemble predicts 1.33% fewer false negatives and 2.63% fewer false positives than our image classifier alone.
