Research

I have worked on a range of NLP problems, but my two primary areas of focus are machine translation and the interpretation of deep NLP models. Below, I first summarize my research interests and then describe my work on interpretation and machine translation.


Research Interests

In the following, I provide a non-exhaustive summary of my areas of interest:

  • Research Interests
    • Applied deep learning and machine learning, unsupervised and semi-supervised learning methods, interpretability and manipulation of neural models, generalization, multi-task learning, transfer learning, representation learning, efficient modeling
    • Natural language processing, statistical and neural machine translation, transliteration, domain adaptation, NLP for resource-poor languages, and social media content processing and analysis
  • Application Interests
    • Building large-scale practical systems, issues related to model deployment, problem solving from the end user's perspective, machine translation competitions
  • Coaching Interests
    • Deep learning from scratch, explaining models by building intuition from real-world examples, making theory easy to understand through animations, practical insights into deep learning models
  • Entrepreneurial Interests
    • Lean startup, technology transfer, business development, customer validation



Interpretation

Highlights

Media coverage: Our work on NeuroX - Analyzing and Controlling Individual Neurons was featured by MIT News and several AI blogs.

Media coverage: Our work on understanding neural machine translation was covered by MIT News and picked up by several outlets such as ScienceBlog and ScienceDaily.

I am interested in interpreting and understanding the learning dynamics of deep neural network models. I have worked on analyzing both whole vector representations and individual neurons in the network, answering questions such as the following (a minimal probing sketch appears after this list):

  • How much linguistic knowledge is learned?
  • How focused and distributed is the information?
  • What is the role of individual neurons?
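
To make the neuron-analysis idea concrete, here is a minimal probing-classifier sketch in Python. It is an illustration under simplifying assumptions rather than the exact pipeline from the papers: the activations and part-of-speech labels are synthetic stand-ins for representations extracted from a real NMT or language-model encoder, and the weight-based salience ranking is one simple way to score neurons in the spirit of NeuroX-style analysis.

```python
# Minimal probing-classifier sketch: train a linear classifier on per-token
# hidden activations to test how much linguistic information (here,
# part-of-speech tags) a representation encodes. The activations and labels
# are synthetic stand-ins; in practice they would be extracted from an NMT
# or language-model encoder.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
num_tokens, hidden_dim, num_tags = 2000, 512, 12

activations = rng.normal(size=(num_tokens, hidden_dim))  # stand-in activations
pos_tags = rng.integers(0, num_tags, size=num_tokens)    # stand-in POS labels

X_train, X_test, y_train, y_test = train_test_split(
    activations, pos_tags, test_size=0.2, random_state=0)

# High probing accuracy (relative to a majority-class baseline) suggests the
# property is linearly decodable from the representation.
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("probing accuracy:", probe.score(X_test, y_test))

# Neuron-level view: neurons with the largest absolute probe weights are the
# most salient for the probed property.
salience = np.abs(probe.coef_).max(axis=0)
print("top 10 neurons for this property:", np.argsort(salience)[::-1][:10])
```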

I showed that this interpretation analysis enables us to (a pruning sketch follows this list):

  • control bias in our models by manipulating individual neurons
  • reduce the model size and speed up inference time by removing irrelevant and redundant neurons
  • improve model performance by injecting linguistic information in a multitask setting
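
The pruning point can be sketched in the same hedged spirit: score neurons with a probe, mask all but the most salient ones, and check how much accuracy survives. The masking strategy and the fractions tried here are illustrative assumptions, not the procedure from the published work, and the data are again synthetic stand-ins.

```python
# Minimal neuron-ablation sketch: rank neurons by probe salience, keep only
# the top fraction, and re-evaluate. With real activations, near-unchanged
# accuracy under aggressive masking signals redundant neurons that can be
# pruned to shrink the model and speed up inference.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
num_tokens, hidden_dim, num_tags = 2000, 512, 12
X = rng.normal(size=(num_tokens, hidden_dim))   # stand-in activations
y = rng.integers(0, num_tags, size=num_tokens)  # stand-in labels

probe = LogisticRegression(max_iter=1000).fit(X[:1600], y[:1600])
salience = np.abs(probe.coef_).max(axis=0)  # per-neuron importance score

for keep_fraction in (1.0, 0.5, 0.1):
    k = int(keep_fraction * hidden_dim)
    keep = np.argsort(salience)[::-1][:k]  # indices of the k most salient neurons
    mask = np.zeros(hidden_dim)
    mask[keep] = 1.0                       # zero out all other neurons
    acc = probe.score(X[1600:] * mask, y[1600:])
    print(f"keep top {keep_fraction:.0%} of neurons -> accuracy {acc:.3f}")
```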

The interpretation work was done mainly in collaboration with MIT CSAIL. It has been published at prestigious venues such as ICLR, AAAI, and ACL, and has been covered by several technology blogs.



Machine Translation

Highlights

  • Live Speech Translation System
  • Machine Translation System
  • First industry-scale dialectal Arabic-to-English machine translation system
  • Machine translation licensed to KanariAI
  • Live Speech Translation won the Best Innovation Award at ARC’18 (see media coverage)

I have worked on both statistical and neural machine translation, covering several languages including English, German, Russian, Arabic, and Hebrew. I have been particularly interested in improving translation for resource-poor and morphologically rich languages; I have also worked on domain adaptation and the handling of unknown words. My research has been published at top-tier venues such as ACL, NAACL, and EMNLP.

In addition to research, I have expertise in building industry-grade, customized machine translation systems. As of July 2020, our system had translated 950 million tokens. It has been used by Aljazeera, BBC, and DW, and is deployed as part of the H2020 SUMMA project.

