A novel transformer attention-based approach for sarcasm detection

Shumaila Khan, Iqbal Qasim, Wahab Khan, Khursheed Aurangzeb, Javed Ali Khan, Muhammad Shahid Anwar

Research output: Contribution to journal › Article › peer-review

Abstract

Sarcasm detection is challenging in natural language processing (NLP) due to its implicit nature, particularly in low-resource languages. Despite limited linguistic resources, researchers have focused on detecting sarcasm on social media platforms, leading to specialized algorithms and models tailored for Urdu text. By analysing patterns and linguistic cues unique to the language, researchers have significantly improved sarcasm detection accuracy, advancing NLP capabilities in low-resource languages and facilitating better communication within diverse online communities. This work introduces UrduSarcasmNet, a novel deep-learning architecture built on cascaded group multi-head attention. By applying a series of attention heads in a cascading manner, the model captures both local and global context, enabling a more comprehensive understanding of the text; the group attention mechanism additionally allows various sub-topics within the content to be considered simultaneously, further enriching the model's effectiveness. The proposed approach is validated on the Urdu-sarcastic-tweets-dataset (UST), which has been curated for this purpose. Experimental results on the UST dataset show that UrduSarcasmNet outperforms a simple-attention baseline and other state-of-the-art models. This research advances NLP and provides valuable insights for improving sarcasm recognition tools in low-resource languages such as Urdu.
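The paper does not publish the exact UrduSarcasmNet layers, but the cascading idea described above — groups of attention heads applied in sequence, each group refining the representation produced by the previous one — can be illustrated with a minimal PyTorch sketch. All names, group counts, and dimensions below are hypothetical choices for illustration, not the authors' configuration.

```python
import torch
import torch.nn as nn

class CascadedGroupAttention(nn.Module):
    """Illustrative sketch (not the published architecture): attention
    head groups are applied in a cascade, so each group attends over
    the output of the previous one rather than over the raw input."""

    def __init__(self, embed_dim: int, num_groups: int = 2, heads_per_group: int = 4):
        super().__init__()
        # One multi-head attention block per group (hypothetical layout).
        self.groups = nn.ModuleList(
            nn.MultiheadAttention(embed_dim, heads_per_group, batch_first=True)
            for _ in range(num_groups)
        )
        self.norms = nn.ModuleList(
            nn.LayerNorm(embed_dim) for _ in range(num_groups)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Cascade: each group's output becomes the next group's input,
        # so later groups see increasingly contextualised representations.
        for attn, norm in zip(self.groups, self.norms):
            attended, _ = attn(x, x, x)   # self-attention within the group
            x = norm(x + attended)        # residual connection per group
        return x

tokens = torch.randn(1, 12, 64)  # (batch, sequence length, embedding dim)
out = CascadedGroupAttention(embed_dim=64)(tokens)
print(out.shape)  # torch.Size([1, 12, 64])
```

The cascade keeps the sequence shape unchanged, so the block can stand in wherever a standard multi-head attention layer would; the residual connection per group is one plausible way to let early-group (local) context survive into later-group (global) refinement.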

Original language: English
Article number: e13686
Pages (from-to): 1-19
Number of pages: 19
Journal: Expert Systems
Early online date: 23 Jul 2024
Publication status: E-pub ahead of print - 23 Jul 2024

Keywords

  • attention models
  • deep learning
  • machine learning
  • natural language processing
  • sarcasm identification
  • sentiment analyses
