I am a faculty member in the Department of Computer Science and Automation (CSA) at the Indian Institute of Science (IISc) Bangalore. My research focuses on building efficient, scalable, and sustainable AI systems through techniques across the hardware–software stack. My work spans accelerator, compiler, and algorithm design, with the goal of enabling high-performance, energy-efficient AI.
Full list on Google Scholar | DBLP
RaPiD: AI Accelerator for Ultra-Low Precision Training and Inference
IEEE 48th Annual International Symposium on Computer Architecture (ISCA), 2021
DNNDaSher: A Compiler Framework for Dataflow-Compatible End-to-End Acceleration on IBM AIU
IEEE Micro, 2024
EMPIR: Ensembles of Mixed Precision Deep Networks for Increased Robustness Against Adversarial Attacks
International Conference on Learning Representations (ICLR), 2020
Efficacy of Pruning in Ultra-Low Precision DNNs
IEEE/ACM International Symposium on Low Power Electronics and Design (ISLPED), 2021
Approximate Computing for Long Short Term Memory (LSTM) Neural Networks
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD), 2018
SparCE: Sparsity-Aware General-Purpose Core Extensions to Accelerate Deep Neural Networks
IEEE Transactions on Computers (TC), 2018
Approximate Computing for Spiking Neural Networks
IEEE Design, Automation & Test in Europe Conference & Exhibition (DATE), 2017
Deep Compression of Pre-Trained Transformer Models
Advances in Neural Information Processing Systems (NeurIPS), 2022
14.1 A Software-Assisted Peak Current Regulation Scheme to Improve Power-Limited Inference Performance in a 5nm AI SoC
IEEE International Solid-State Circuits Conference (ISSCC), 2024
Sparsity Turns Adversarial: Energy and Latency Attacks on Deep Neural Networks
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems (TCAD), 2020
Efficiency Attacks on Spiking Neural Networks
ACM/IEEE Design Automation Conference (DAC), 2022
A Case for Generalizable DNN Cost Models for Mobile Devices
IEEE International Symposium on Workload Characterization (IISWC), 2020
Dynamic Spike Bundling for Energy-Efficient Spiking Neural Networks
IEEE/ACM International Symposium on Low Power Electronics and Design (ISLPED), 2019
Axformer: Accuracy-Driven Approximation of Transformers for Faster, Smaller and More Accurate NLP Models
International Joint Conference on Neural Networks (IJCNN), 2022
Family Of Lossy Sparse Load SIMD Instructions
US Patent 11,663,001, May 30, 2023
Exploiting Fine-Grained Structured Weight Sparsity In Systolic Arrays
US Patent 11,941,111, March 26, 2024
Sparse Systolic Array Design
US Patent 11,669,489, June 6, 2023
Stickification Using Anywhere Padding to Accelerate Data Manipulation
US Patent 12,468,947, November 11, 2025
Generating Program Binaries for a Transformer to Process Sequence Concatenations Having Different Input Lengths
US Patent App. 18/344,487
Reusing Weights and Biases in an Artificial Intelligence Accelerator for a Neural Network for Different Minibatch Sizes of Inferences
US Patent App. 18/344,491, November 11, 2025
Deep Learning Optimization Through Zero Tile Manipulation
US Patent App. 18/448,390
Sanchari Sen is an Assistant Professor in the Department of Computer Science and Automation (CSA) at the Indian Institute of Science (IISc) Bangalore. Her research focuses on building efficient, scalable, and sustainable AI systems through techniques across the hardware–software stack.
Email: sancharisen[at]iisc.ac.in
Office:
CSA 234
Computer Science and Automation
Indian Institute of Science
Bangalore 560012
India
Links:
Google Scholar |
DBLP |
LinkedIn