I completed my M.S. in Computer Science at the University of Virginia in 2018. Before joining the University of Virginia, I worked as a Lecturer in the Department of Computer Science and Engineering at BRAC University. I completed my B.Sc. in Computer Science and Engineering at Bangladesh University of Engineering and Technology (BUET) in 2013.
M.S. in Computer Science, 2018
University of Virginia
B.Sc. in Computer Science and Engineering, 2013
Bangladesh University of Engineering and Technology (BUET)
All publications are listed here.
We explore whether a general-purpose search engine like Google is an optimal choice for code-related searches.
We performed an empirical study to understand the interaction between IR-based similarity measures and document types, and observed that the choice of model has a significant impact on performance for different types of artifacts. In a case study on two SE tasks, we found that an informed choice of similarity measure indeed improves the performance of SE tools.
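The study itself compares several IR-based similarity measures; as a minimal illustration of one common measure, here is a TF-IDF cosine similarity sketch. The corpus and tokenization below are invented for illustration, not taken from the study.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Compute TF-IDF weight vectors for a list of tokenized documents."""
    n = len(docs)
    # Document frequency: number of documents containing each term.
    df = Counter(term for doc in docs for term in set(doc))
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({t: (c / len(doc)) * math.log(n / df[t])
                        for t, c in tf.items()})
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse (dict) vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Toy corpus: two related "documents" and one unrelated one.
docs = [
    "read file into buffer".split(),
    "open file read buffer close".split(),
    "train neural network model".split(),
]
vecs = tfidf_vectors(docs)
# The first two documents share terms, so their similarity is higher.
print(cosine(vecs[0], vecs[1]) > cosine(vecs[0], vecs[2]))  # True
```

Different artifact types (source code, bug reports, API docs) have very different vocabularies and lengths, which is why no single similarity measure wins across all of them.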
Given a project as a query, the task is to find functionally similar projects on GitHub. The result is a ranked list of projects, with the most relevant ones at the top.
Personalization provides more effective, useful, and relevant search results. However, it also risks compromising users' privacy, since their underlying intent can be inferred from their logged search behavior. To address this privacy issue, we proposed a client-side Topic-based Privacy Protection solution.
We proposed a solution to complex-query image retrieval by leveraging a state-of-the-art semantic query parser and detailed image captioning in the form of a scene graph. We confirm the effectiveness of our proposed model by evaluating it on the Visual Genome dataset.
We proposed an approach to the code completion task that leverages a language model together with code templates. Our preliminary results show that neural-network-based language models such as RNNs, LSTMs, and bidirectional LSTMs outperform n-gram language models.
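The work compares neural language models against an n-gram baseline; as a minimal sketch of the baseline idea, here is a bigram model that suggests the next code token from counts. The toy token streams are invented for illustration and are not the actual training data.

```python
from collections import Counter, defaultdict

def train_bigram_lm(token_streams):
    """Count bigram frequencies over tokenized code, keyed by preceding token."""
    counts = defaultdict(Counter)
    for tokens in token_streams:
        for prev, nxt in zip(tokens, tokens[1:]):
            counts[prev][nxt] += 1
    return counts

def complete(counts, prev_token, k=3):
    """Suggest up to k most likely next tokens after prev_token."""
    return [tok for tok, _ in counts[prev_token].most_common(k)]

# Toy training data: whitespace-tokenized code lines.
corpus = [
    "for i in range ( n )".split(),
    "for j in range ( m )".split(),
    "if x in values :".split(),
]
model = train_bigram_lm(corpus)
print(complete(model, "in"))  # ['range', 'values'] — "range" follows "in" twice
```

A neural LM replaces these raw counts with a learned distribution over the next token, which generalizes beyond exact contexts seen during training; the template side constrains suggestions to syntactically valid completions.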