Hi there, I'm Anirudh Thatipelli!

I am an MS CS student at UC Riverside. Previously, I worked as a research intern at the Computer Vision Lab at MBZUAI, where I was fortunate to be advised by Dr. Sanath Narayan and Dr. Fahad Shahbaz Khan and worked on research projects in few-shot learning and action recognition. Before that, I spent some fantastic time at CVIT, IIIT Hyderabad, where I worked under Ravi Kiran Sarvadevabhatla on the problem of skeleton action recognition. In the summer of 2018, I also worked on medical imaging under Jayanthi Sivaswamy, and I briefly worked at Dell International Services as a software development intern. I am glad to be surrounded by excellent collaborators and mentors who helped me push beyond my boundaries.

Email  /  Google Scholar  /  Twitter  /  Github  /  Linkedin  /  Resume/CV

I started this blog to share and gain knowledge. I will be posting about my projects, interests, and the work I do.

I have added question papers and assignments from my courses at Shiv Nadar University.

Research

I am broadly interested in developing models that can learn from limited data and few training samples. Most of my research focuses on building such models for action recognition.

Spatio-temporal Relation Modeling for Few-shot Action Recognition
Anirudh Thatipelli, Sanath Narayan, Salman Khan, Rao Muhammad Anwer, Fahad Shahbaz Khan, Bernard Ghanem
CVPR 2022
paper / code
  • Description: Proposed STRM, a novel spatio-temporal enrichment module for few-shot action recognition.
  • Outcome: Improved state-of-the-art performance on the challenging SSv2 dataset by 3.5%.
Quo Vadis, Skeleton Action Recognition?
Pranay Gupta, Anirudh Thatipelli, Aditya Aggarwal, Shubh Maheshwari, Neel Trivedi, Sourav Das, Ravi Kiran Sarvadevabhatla
International Journal of Computer Vision (IJCV), Special Issue on Human pose, Motion, Activities and Shape in 3D, 2021
paper / code
  • Description: In this work, we study current and upcoming frontiers across the landscape of skeleton-based human action recognition. We benchmark state-of-the-art models on the NTU-120 dataset and provide a multi-layered assessment.
  • Outcome: We introduced the Skeletics-152 and Skeleton-Mimetics datasets. Our results reveal the challenges and domain gap induced by 'in the wild' action videos.
NTU-X: An Enhanced Large-scale Dataset for Improving Pose-based Recognition of Subtle Human Actions
Neel Trivedi, Anirudh Thatipelli, Ravi Kiran Sarvadevabhatla
[ORAL] 12th Indian Conference on Computer Vision, Graphics and Image Processing (ICVGIP), 2021
paper / code
  • Description: In addition to the 25 body joints per skeleton in NTU RGB+D, the NTU60-X and NTU120-X datasets include finger and facial joints, enabling a richer skeleton representation for recognizing subtle actions. We appropriately modify state-of-the-art approaches to enable training on the introduced datasets.
  • Outcome: Our results demonstrate the effectiveness of the NTU-X datasets in overcoming this bottleneck, improving state-of-the-art performance both overall and on the previously worst-performing action categories.

Built this website using the guide and template available here.