Profanity is the use of offensive, obscene, or abusive vocables or expressions in public conversations. Nowadays, a major source of text conversations is digital media such as forums, blogs, and social networks, where malicious users take advantage of their wide worldwide reach to disseminate undesired profanity aimed at insulting or denigrating opinions, names, or trademarks. Lexicon-based exact comparison is the most common filter used to prevent such attacks in these media; however, ingenious users disguise profanity by transliterating or masking the original vocable while still conveying its intended semantics (e.g. by writing piss as P!55 or p.i.s.s), hence defeating the filter. Recent approaches to this problem, inspired by the sequence alignment methods of comparative genomics in bioinformatics, have shown promise in unmasking such guises. Building upon those techniques, we have developed an experimental Web forum (ForumForte) in which user comments are cleaned of disguised profanity. In this paper we briefly discuss the techniques and the main engineering artefacts obtained during the development of the software. Empirical evidence reveals filtering effectiveness between 84% and 97% at the vocable level, depending on the length of the profanity (for vocables of more than four letters), and 86% at the sentence level, when tested on two sets of real user-generated comments written in Spanish and Portuguese. These results suggest the suitability of the software as a language-independent tool.
web forum, profanity detection, text analysis
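To illustrate the idea behind the alignment-based approach described above, the following is a minimal sketch, not the authors' implementation: a Wagner–Fischer edit-distance variant in which common leetspeak substitutions cost nothing and separator characters are skipped, so a disguised vocable such as P!55 or p.i.s.s aligns with its canonical form piss. The substitution map and separator set are hypothetical examples.

```python
# Hypothetical map of canonical letters to look-alike characters (leetspeak).
HOMOGLYPHS = {
    'i': {'1', '!', '|'},
    's': {'5', '$'},
    'a': {'4', '@'},
    'o': {'0'},
    'e': {'3'},
}
# Characters often used to mask a vocable (p.i.s.s, p-i-s-s, ...).
SEPARATORS = set(".-_* ")

def char_cost(canonical_ch, observed_ch):
    """Cost of aligning a canonical letter with an observed character:
    zero for an exact (case-insensitive) match or a known homoglyph."""
    if canonical_ch == observed_ch:
        return 0
    if observed_ch in HOMOGLYPHS.get(canonical_ch, set()):
        return 0
    return 1

def guise_distance(canonical, observed):
    """Levenshtein-style distance with homoglyph-aware substitution;
    separator characters in the observed token are discarded first."""
    obs = [c for c in observed.lower() if c not in SEPARATORS]
    m, n = len(canonical), len(obs)
    # Standard dynamic-programming table (Wagner-Fischer).
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(
                d[i - 1][j] + 1,   # deletion from canonical
                d[i][j - 1] + 1,   # insertion into canonical
                d[i - 1][j - 1] + char_cost(canonical[i - 1], obs[j - 1]),
            )
    return d[m][n]
```

A token would then be flagged when its distance to some lexicon entry falls below a threshold; for example, `guise_distance("piss", "P!55")` and `guise_distance("piss", "p.i.s.s")` both evaluate to 0 under the map above.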