Publications
2022
[WACV] Does Data Repair Lead to Fair Models? Curating Contextually Fair Data To Reduce Model Bias
Agarwal, Sharat, Muku, Sumanyu, Anand, Saket, and Arora, Chetan
In 2022 IEEE Winter Conference on Applications of Computer Vision (WACV), 2022
Contextual information is a valuable cue for Deep Neural Networks (DNNs) to learn better representations and improve accuracy. However, co-occurrence bias in the training dataset may hamper a DNN model's generalizability to unseen scenarios in the real world. For example, in COCO, many object categories have a much higher co-occurrence with men than with women, which can bias a DNN's predictions in favor of men. Recent works have focused on task-specific training strategies to handle bias in such scenarios, but fixing the available data is often ignored. In this paper, we propose a novel and more generic solution to address contextual bias in datasets by selecting a subset of the samples that is fair in terms of co-occurrence with the various classes for a protected attribute. We introduce a data repair algorithm using the coefficient of variation, which can curate fair and contextually balanced data for one or more protected classes. This helps in training a fair model irrespective of the task, architecture, or training methodology. Our proposed solution is simple, effective, and can even be used in an active learning setting where data labels are absent or are generated incrementally. We demonstrate the effectiveness of our algorithm for object detection and multi-label image classification across different datasets. Through a series of experiments, we validate that curating contextually fair data helps make model predictions fair by balancing the true positive rate for the protected class across groups without compromising the model's overall performance.
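The repair step lends itself to a short sketch. The Python below is illustrative only, not the paper's exact algorithm: the greedy removal heuristic, the function names, and the per-image co-occurrence encoding are assumptions; only the use of the coefficient of variation (standard deviation over mean) of category/group co-occurrence counts as the balance score comes from the abstract.

import numpy as np

def coefficient_of_variation(counts):
    # Per-category CV (std / mean) of co-occurrence counts across
    # protected groups; counts has shape (num_categories, num_groups).
    counts = np.asarray(counts, dtype=float)
    mean = counts.mean(axis=1)
    std = counts.std(axis=1)
    return np.divide(std, mean, out=np.zeros_like(mean), where=mean > 0)

def repair_subset(image_cooc, num_keep):
    # Greedy data repair: image_cooc[i][c, g] is 1 when category c
    # co-occurs with protected group g (e.g., man/woman) in image i.
    # Repeatedly drop the image whose removal most reduces the mean CV
    # of the remaining co-occurrence counts, until num_keep images remain.
    keep = list(range(len(image_cooc)))
    totals = np.sum(image_cooc, axis=0).astype(float)
    while len(keep) > num_keep:
        best_i, best_cv = None, np.inf
        for i in keep:
            cv = coefficient_of_variation(totals - image_cooc[i]).mean()
            if cv < best_cv:
                best_i, best_cv = i, cv
        keep.remove(best_i)
        totals -= image_cooc[best_i]
    return keep  # indices of the retained, contextually balanced subset

Driving each category's CV toward zero means it co-occurs equally often with every protected group, which is the balance criterion the abstract describes.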
2021
[Eur. Radiol.] Artificial Intelligence–assisted chest X-ray assessment scheme for COVID-19
Rangarajan, Krithika, Muku, Sumanyu, Garg, Amit Kumar, Gabra, Pavan, Shankar, Sujay Halkur, Nischal, Neeraj, Soni, Kapil Dev, Bhalla, Ashu Seith, Mohan, Anant, Tiwari, Pawan, Bhatnagar, Sushma, Bansal, Raghav, Kumar, Atin, Gamanagati, Shivanand, Aggarwal, Richa, Baitha, Upendra, Biswas, Ashutosh, Kumar, Arvind, Jorwal, Pankaj, Shalimar, Shariff, A., Wig, Naveet, Subramanium, Rajeshwari, Trikha, Anjan, Malhotra, Rajesh, Guleria, Randeep, Namboodiri, Vinay, Banerjee, Subhashis, and Arora, Chetan
European Radiology, 2021
To study whether a trained convolutional neural network (CNN) can be of assistance to radiologists in differentiating Coronavirus disease (COVID)–positive from COVID-negative patients using chest X-ray (CXR) through an ambispective clinical study. To identify subgroups of patients where artificial intelligence (AI) can be of particular value and analyse what imaging features may have contributed to the performance of AI by means of visualisation techniques.
2019
[CoRR] A Study of Black Box Adversarial Attacks in Computer Vision Models
Bhambri, Siddhant, Muku, Sumanyu, Tulasi, Avinash, and Buduru, Arun Balaji
CoRR, 2019
Machine learning has seen tremendous advances in the past few years, which has led to deep learning models being deployed in varied applications of day-to-day life. Attacks on such models using perturbations, particularly in real-life scenarios, pose a severe challenge to their applicability, pushing research in directions that aim to enhance the robustness of these models. Since the introduction of these perturbations by Szegedy et al. [1], a significant amount of research has focused on the reliability of such models, primarily in two settings: white-box, where the adversary has access to the targeted model and related parameters; and black-box, which resembles a real-life scenario in which the adversary has almost no knowledge of the model to be attacked. To provide a comprehensive security cover, it is essential to identify, study, and build defenses against such attacks. Hence, in this paper, we present a comprehensive comparative study of various black-box adversarial attacks and defense techniques.
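As a concrete illustration of the black-box setting the survey covers, here is a minimal score-based attack in the spirit of simple iterative black-box attacks such as SimBA (Guo et al., 2019). It is a sketch, not code from the paper: the function name and the assumption that `model` is a callable returning a class-probability vector are illustrative.

import numpy as np

def simba_attack(model, x, label, eps=0.2, max_queries=1000):
    # Score-based black-box attack: the adversary sees only output
    # probabilities (model(x) -> probability vector), never gradients
    # or weights. Try one random coordinate at a time and keep any
    # step that lowers the model's confidence in the true label.
    # x: flattened image with values in [0, 1].
    x_adv = x.copy()
    probs = model(x_adv)
    queries = 1
    for d in np.random.permutation(x_adv.size):
        if queries >= max_queries or probs.argmax() != label:
            break  # out of query budget, or already misclassified
        for sign in (1.0, -1.0):
            candidate = x_adv.copy()
            candidate[d] = np.clip(candidate[d] + sign * eps, 0.0, 1.0)
            cand_probs = model(candidate)
            queries += 1
            if cand_probs[label] < probs[label]:
                x_adv, probs = candidate, cand_probs  # keep the step
                break
    return x_adv  # adversarial (or best-effort) example

The defining constraint of the black-box setting is visible in the code: every decision is made from queried output probabilities alone, with no access to the model's gradients or parameters.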