Posts by Collection

Publications

Generating Elevation Surface from a single RGB remotely sensed image using Deep Learning

Published in Remote Sensing, 2020

An end-to-end approach that maps a single remotely sensed RGB image to an elevation surface using Conditional Generative Adversarial Networks

Recommended citation: Panagiotou E, Chochlakis G, Grammatikopoulos L, Charou E. Generating Elevation Surface from a Single RGB Remotely Sensed Image Using Deep Learning. Remote Sensing. 2020; 12(12):2002. https://www.mdpi.com/2072-4292/12/12/2002/pdf

CLiMB: A Continual Learning Benchmark for Vision-and-Language Tasks

Published in NeurIPS, 2022

Continual learning is a challenging setting that remains underexplored in the vision-and-language domain. We introduce CLiMB🧗, the Continual Learning in Multimodality Benchmark, to enable the development of multimodal models that learn continually.

Recommended citation: Srinivasan, T., Chang, T. Y., Alva, L. L. P., Chochlakis, G., Rostami, M., & Thomason, J. (2022). CLiMB: A Continual Learning Benchmark for Vision-and-Language Tasks. Thirty-sixth Conference on Neural Information Processing Systems. https://arxiv.org/abs/2206.09059

VAuLT: Augmenting the Vision-and-Language Transformer for Sentiment Classification on Social Media

Posted on arXiv, 2022

We propose the Vision-and-Augmented-Language Transformer (VAuLT), an extension of the popular Vision-and-Language Transformer (ViLT). The key insight is to propagate the output representations of a large language model like BERT to the language input of ViLT.

Recommended citation: Chochlakis, G.; Srinivasan, T.; Thomason, J.; and Narayanan, S. 2022. VAuLT: Augmenting the Vision-and-Language Transformer for Sentiment Classification on Social Media. arXiv preprint arXiv:2208.09021. https://arxiv.org/abs/2208.09021
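As a minimal sketch of the idea described above (toy stand-in modules and dimensions, not the authors' code, which in practice would build on pretrained `BertModel` and `ViltModel` checkpoints from the Hugging Face transformers library), the contextual states of a language model replace raw token embeddings as the language input of a vision-and-language encoder:

```python
import torch
import torch.nn as nn

class VAuLTSketch(nn.Module):
    """Toy sketch: a language model's hidden states are propagated into the
    language input of a small vision-and-language transformer encoder."""

    def __init__(self, lm_dim=32, vl_dim=16, vocab=100, patch_dim=8, n_classes=3):
        super().__init__()
        # Stand-in for BERT: in the real model this is a pretrained LM.
        self.language_model = nn.Embedding(vocab, lm_dim)
        # Map LM representation space into the VL encoder's embedding space.
        self.project = nn.Linear(lm_dim, vl_dim)
        # Stand-in for ViLT's patch embedding of the image.
        self.patch_embed = nn.Linear(patch_dim, vl_dim)
        layer = nn.TransformerEncoderLayer(d_model=vl_dim, nhead=4, batch_first=True)
        self.vl_encoder = nn.TransformerEncoder(layer, num_layers=1)
        self.classifier = nn.Linear(vl_dim, n_classes)  # e.g. sentiment classes

    def forward(self, token_ids, patches):
        lm_states = self.language_model(token_ids)   # (B, T, lm_dim), BERT-like states
        lang = self.project(lm_states)               # propagated language input
        vis = self.patch_embed(patches)              # (B, P, vl_dim)
        fused = self.vl_encoder(torch.cat([lang, vis], dim=1))
        return self.classifier(fused.mean(dim=1))    # mean-pool, then classify
```

A forward pass with a batch of 2 sequences of 5 tokens and 4 image patches, `model(torch.randint(0, 100, (2, 5)), torch.randn(2, 4, 8))`, yields logits of shape (2, 3).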

Leveraging Label Correlations in a Multi-label Setting: A Case Study in Emotion

Published in ICASSP, 2023

First, we develop two modeling approaches to emotion recognition that capture word associations of the emotion words themselves, either by including the emotions in the input or by leveraging Masked Language Modeling (MLM). Second, we integrate pairwise constraints on emotion representations as regularization terms alongside the classification loss of the models.

Recommended citation: Chochlakis, Georgios, Gireesh Mahajan, Sabyasachee Baruah, Keith Burghardt, Kristina Lerman, and Shrikanth Narayanan. "Leveraging label correlations in a multi-label setting: A case study in emotion." In ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1-5. IEEE, 2023. https://arxiv.org/abs/2210.15842
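To illustrate the pairwise-constraint idea mentioned above, here is a hedged sketch (a hypothetical contrastive-style form, not the paper's exact loss): embeddings of correlated emotion pairs are pulled together, while anti-correlated pairs are pushed at least a margin apart, and the result is added to the classification loss as a regularizer:

```python
import torch

def pairwise_emotion_regularizer(emb, pos_pairs, neg_pairs, margin=1.0):
    """emb: (num_emotions, dim) emotion embeddings.
    pos_pairs: index pairs of correlated emotions (e.g. joy/love) to pull together.
    neg_pairs: index pairs of anti-correlated emotions (e.g. joy/sadness) to push apart."""
    # Squared distance penalty for correlated pairs.
    pull = torch.stack(
        [(emb[i] - emb[j]).pow(2).sum() for i, j in pos_pairs]
    ).mean()
    # Hinge penalty when anti-correlated pairs are closer than the margin.
    push = torch.stack(
        [torch.relu(margin - (emb[i] - emb[j]).norm()) ** 2 for i, j in neg_pairs]
    ).mean()
    return pull + push
```

In training, the total objective would be something like `classification_loss + lam * pairwise_emotion_regularizer(emb, pos_pairs, neg_pairs)`, with `lam` a weighting hyperparameter.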

Using Emotion Embeddings to Transfer Knowledge Between Emotions, Languages, and Annotation Formats

Published in ICASSP, 2023

In this work, we study how we can build a single emotion recognition model that can transition between different configurations, i.e., languages, emotions, and annotation formats, by leveraging multilingual models and Demux.

Recommended citation: Chochlakis, Georgios, Gireesh Mahajan, Sabyasachee Baruah, Keith Burghardt, Kristina Lerman, and Shrikanth Narayanan. "Using Emotion Embeddings to Transfer Knowledge between Emotions, Languages, and Annotation Formats." In ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1-5. IEEE, 2023. https://arxiv.org/abs/2211.00171

The Strong Pull of Prior Knowledge in Large Language Models and Its Impact on Emotion Recognition

Under review

In this work, we design experiments and propose measurements to explicitly quantify the consistency of proxies of LLM priors and their pull on the posteriors. We show that LLMs have strong yet inconsistent priors in emotion recognition that ossify their predictions. We also find that the larger the model, the stronger these effects become. Our results suggest that caution is needed when using ICL with larger LLMs for affect-centered tasks outside their pre-training domain and when interpreting ICL results.

Recommended citation: Chochlakis, Georgios, Alexandros Potamianos, Kristina Lerman and Shrikanth Narayanan. “The Strong Pull of Prior Knowledge in Large Language Models and Its Impact on Emotion Recognition.” (2024). https://arxiv.org/abs/2403.17125

Socio-Linguistic Characteristics of Coordinated Inauthentic Accounts

To appear in ICWSM, 2024

In this work, we utilize heuristics to identify coordinated inauthentic accounts and detect attitudes, concerns, and emotions within their social media posts, collectively known as socio-linguistic characteristics.

Recommended citation: Burghardt, Keith, Ashwin Rao, Siyi Guo, Zihao He, Georgios Chochlakis, Sabyasachee Baruah, Andrew Rojecki, Shri Narayanan, and Kristina Lerman. "Socio-Linguistic Characteristics of Coordinated Inauthentic Accounts." arXiv preprint arXiv:2305.11867 (2023). https://arxiv.org/abs/2305.11867
