Social media has become an integral part of everyday life, enabling the rapid dissemination of information and opinions. However, people sometimes use such media to express a range of phenomena that often overlap and intersect, encompassing a variety of types of speech that cause different harms. Such speech is collectively known as harmful speech. In this talk, I will focus on how we can use enablers to tackle this problem with the help of moderators. These enablers include detection systems capable of identifying various forms of harmful messages, explainers that add explainability to such systems, and mitigation systems that regulate the spread of harmful speech. All of these systems aim to reduce moderators' workload, thereby making moderation pipelines more efficient.