**A Deep Learning approach for the Motion Picture Content Rating**

Monica Gruosso^1, Nicola Capece^2, Ugo Erra^1, Nunzio Lopardo^1

^1 Department of Mathematics, Computer Science and Economics, ^2 School of Engineering

University of Basilicata, Potenza, Italy 85100

monica.gruosso@unibas.it, nicola.capece@unibas.it, ugo.erra@unibas.it, nunziolop95@gmail.com

![](teaser.png width="100%")

_We used a rating scale to manually label the collected data: non-violent frame (General Audiences, G), which corresponds to images a child can watch; violent frame (Parents Strongly Cautioned, PG-13), which indicates that the image is not recommended for pre-teenagers because it contains weapons, fire, explosions, or hand-to-hand combat; and very violent frame (Restricted, R), which may not be viewed by anyone under 17 without a parent or adult guardian and includes bloody, very realistic violence characterized by elements such as blood, death, or physical torture._

Abstract
===============================================================================

The film industry brings thousands of films to life every year. Not all of them are suitable for everyone, especially those with violent content. A content rating system evaluates content and reports its suitability for children, teenagers, or adults. It assists content providers in assigning rating levels to movies and, on the other hand, allows users to block violent content directly on their devices. However, rating movie content can be tedious and prone to personal judgment, and it becomes unfeasible when the videos on video-sharing websites are also considered. This work presents a motion picture content rating model that automatically classifies and censors violent scenes using a Deep Learning (DL) approach. We collect a large amount of data by searching for visual elements, such as blood or weapons, and manually label them according to a rating scale. We then employ the Convolutional Neural Network (CNN) [Inception v3](https://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Szegedy_Rethinking_the_Inception_CVPR_2016_paper.pdf) for training and validation. The CNN is modified, and additional regularization techniques are adopted to avoid overfitting during training. Finally, we design a video post-processing algorithm to refine the network output.
Preliminary results demonstrate the effectiveness of our automatic classifier in supporting content providers in assigning ratings and encourage further investigation of the use of DL.

Overview
===============================================================================

![](architecture_inceptionv3.png width="100%")
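As a rough sketch of the classifier described in the abstract, assuming a TensorFlow/Keras implementation (the framework, dropout rate, and optimizer below are illustrative assumptions, not taken from the paper). The paper inserts dropout after the average pooling layers inside the last two inception modules; for brevity, this sketch instead adds a single dropout layer before the final fully connected layer.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import InceptionV3

NUM_CLASSES = 3  # G, PG-13, R

# Inception v3 backbone without its original 1000-class ImageNet head;
# global average pooling yields a 2048-dimensional feature vector.
# Pass weights="imagenet" instead of None to fine-tune pretrained weights.
base = InceptionV3(weights=None, include_top=False,
                   pooling="avg", input_shape=(299, 299, 3))

# Dropout regularization before the new fully connected softmax head.
features = layers.Dropout(0.5)(base.output)
predictions = layers.Dense(NUM_CLASSES, activation="softmax")(features)

model = models.Model(inputs=base.input, outputs=predictions)
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```

Swapping `NUM_CLASSES` for 2 would give the binary variant of the classifier with the same single-dropout head.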

This figure shows the architecture of our three-class classifier based on the [Inception v3](https://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Szegedy_Rethinking_the_Inception_CVPR_2016_paper.pdf) network. A dropout layer was added after each average pooling layer of the last two inception modules, which are bordered by a blue dotted line. In the binary classifier, instead, only the last dropout layer was kept, which is located before the last fully connected layer. All layers are drawn with blocks of the same size to simplify their graphical representation.

Video
===============================================================================

![A video](video_contentRatingModel.mp4)

BibTeX
===============================================================================

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
@INPROCEEDINGS{gruosso2019deep,
  title     = "A Deep Learning approach for the Motion Picture Content Rating",
  author    = "Gruosso, Monica and Capece, Nicola and Erra, Ugo and Lopardo, Nunzio",
  booktitle = "2019 10th IEEE International Conference on Cognitive Infocommunications (CogInfoCom)",
  year      = "2019",
  pages     = "137-142",
  doi       = "10.1109/CogInfoCom47531.2019.9089897",
  month     = "Oct",
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Resources
===============================================================================

| Download | Description |
|:--------:|:-----------:|
| | Train/Validation Dataset |
| | Test Dataset |
| | Official publication: 10th IEEE International Conference on Cognitive Infocommunications (CogInfoCom 2019) |

Acknowledgments
===============================================================================

The authors thank NVIDIA's Academic Research Team for providing the GeForce GTX 1080 Ti card under the Hardware Donation Program.
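The video post-processing algorithm mentioned in the abstract is not detailed on this page. As one plausible illustration of that kind of refinement (not the authors' actual method), per-frame network labels can be smoothed with a centered sliding-window majority vote, which suppresses isolated misclassifications; `smooth_labels` and the `window` parameter are hypothetical names introduced here.

```python
from collections import Counter

def smooth_labels(frame_labels, window=5):
    """Replace each per-frame label with the majority label inside a
    centered window of `window` frames (truncated at clip borders)."""
    half = window // 2
    smoothed = []
    for i in range(len(frame_labels)):
        neighborhood = frame_labels[max(0, i - half):i + half + 1]
        smoothed.append(Counter(neighborhood).most_common(1)[0][0])
    return smoothed

# A single spurious "R" frame inside a calm scene is voted away.
print(smooth_labels(["G", "G", "R", "G", "G"]))  # ['G', 'G', 'G', 'G', 'G']
```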
Award
===============================================================================

Best Paper Award at the 10th IEEE International Conference on Cognitive Infocommunications.

![](Best_Paper.jpg width="100%")