Posts by Collection

Portfolio

Publications

Generating Elevation Surface from a Single RGB Remotely Sensed Image Using Deep Learning

Published in Remote Sensing, 2020

An end-to-end approach that learns a mapping from a single remotely sensed RGB image to an elevation surface using Conditional Generative Adversarial Networks; a minimal sketch of such a setup follows the recommended citation below.

Recommended citation: Panagiotou E, Chochlakis G, Grammatikopoulos L, Charou E. Generating Elevation Surface from a Single RGB Remotely Sensed Image Using Deep Learning. Remote Sensing. 2020; 12(12):2002. https://www.mdpi.com/2072-4292/12/12/2002/pdf
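The approach falls in the conditional image-to-image translation family. The snippet below is a minimal sketch of such a setup in PyTorch: a generator maps an RGB patch to a single-channel elevation patch and is trained with an adversarial term plus an L1 reconstruction term. The tiny network architectures, optimizer settings, and loss weight are illustrative assumptions, not the configuration used in the paper.

```python
# Minimal conditional-GAN sketch for RGB -> elevation (pix2pix-style).
# All architectures and hyperparameters below are illustrative assumptions.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Tiny encoder-decoder: 3-channel RGB in, 1-channel elevation out."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 1, 4, stride=2, padding=1),
        )

    def forward(self, rgb):
        return self.net(rgb)

class Discriminator(nn.Module):
    """PatchGAN-style critic over the (RGB, elevation) pair."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + 1, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=1, padding=1),  # patch-level real/fake logits
        )

    def forward(self, rgb, elevation):
        return self.net(torch.cat([rgb, elevation], dim=1))

G, D = Generator(), Discriminator()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()
lambda_l1 = 100.0  # weight of the reconstruction term (assumed)

def train_step(rgb, dem):
    # Discriminator: real (RGB, elevation) pairs vs. generated pairs.
    fake = G(rgb)
    d_real = D(rgb, dem)
    d_fake = D(rgb, fake.detach())
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: fool the discriminator and stay close to the true elevation.
    d_fake = D(rgb, fake)
    loss_g = bce(d_fake, torch.ones_like(d_fake)) + lambda_l1 * l1(fake, dem)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

# Example with random tensors standing in for a real dataset.
rgb = torch.randn(2, 3, 64, 64)   # RGB patches
dem = torch.randn(2, 1, 64, 64)   # corresponding elevation patches
print(train_step(rgb, dem))
```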

CLiMB: A Continual Learning Benchmark for Vision-and-Language Tasks

Published in NeurIPS, 2022

Continual learning is a challenging setting that remains underexplored in the vision-and-language domain. We introduce CLiMB🧗, the Continual Learning in Multimodality Benchmark, to enable the development of multimodal models that learn continually.

Recommended citation: Srinivasan, T., Chang, T. Y., Alva, L. L. P., Chochlakis, G., Rostami, M., & Thomason, J. (2022). CLiMB: A Continual Learning Benchmark for Vision-and-Language Tasks. Thirty-sixth Conference on Neural Information Processing Systems. https://arxiv.org/abs/2206.09059

VAuLT: Augmenting the Vision-and-Language Transformer for Sentiment Classification on Social Media

Under review

We propose the Vision-and-Augmented-Language Transformer (VAuLT), an extension of the popular Vision-and-Language Transformer (ViLT). The key idea is to propagate the output representations of a large language model like BERT to the language input of ViLT; a minimal sketch of this idea follows the recommended citation below.

Recommended citation: Chochlakis, G.; Srinivasan, T.; Thomason, J.; and Narayanan, S. 2022. VAuLT: Augmenting the Vision-and-Language Transformer for Sentiment Classification on Social Media. arXiv preprint arXiv:2208.09021. https://arxiv.org/abs/2208.09021
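The snippet below sketches the stated idea in PyTorch: contextual representations from BertModel (Hugging Face transformers) are fed to a joint vision-and-language encoder in place of static token embeddings. `ViltStyleEncoder` is a hypothetical stand-in for ViLT, and the patch dimensions are assumptions; this illustrates the idea rather than reproducing the authors' implementation.

```python
# Sketch of the VAuLT idea: feed a language model's contextual representations
# into a ViLT-style multimodal encoder instead of static token embeddings.
# `ViltStyleEncoder` is a hypothetical placeholder, not the real ViLT model.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class ViltStyleEncoder(nn.Module):
    """Hypothetical stand-in for ViLT: a transformer over the concatenation
    of text embeddings and image patch embeddings."""
    def __init__(self, hidden=768, layers=2):
        super().__init__()
        self.patch_proj = nn.Linear(32 * 32 * 3, hidden)  # 32x32 patches (assumed)
        layer = nn.TransformerEncoderLayer(hidden, nhead=12, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=layers)

    def forward(self, text_embeds, image_patches):
        img_embeds = self.patch_proj(image_patches)           # (B, patches, hidden)
        tokens = torch.cat([text_embeds, img_embeds], dim=1)  # joint sequence
        return self.encoder(tokens)

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")
vilt_like = ViltStyleEncoder()

batch = tokenizer(["a cat sleeping on a red couch"], return_tensors="pt")
with torch.no_grad():
    # Key step: contextual BERT states replace static token embeddings.
    text_embeds = bert(**batch).last_hidden_state             # (B, L, 768)

image_patches = torch.randn(1, 196, 32 * 32 * 3)              # dummy image patches
joint = vilt_like(text_embeds, image_patches)
print(joint.shape)                                            # (1, L + 196, 768)
```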

Leveraging Label Correlations in a Multi-label Setting: A Case Study in Emotion

To appear in ICASSP, 2023

First, we develop two modeling approaches to emotion recognition that capture word associations of the emotion words themselves, either by including the emotions in the input or by leveraging Masked Language Modeling (MLM). Second, we integrate pairwise constraints on the emotion representations as regularization terms alongside the classification loss of the models; a minimal sketch of such a combined objective follows the recommended citation below.

Recommended citation: Chochlakis, G., Mahajan, G., Baruah, S., Burghardt, K., Lerman, K. and Narayanan, S., 2022. Leveraging Label Correlations in a Multi-label Setting: A Case Study in Emotion. arXiv preprint arXiv:2210.15842. https://arxiv.org/abs/2210.15842
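The snippet below sketches what such a combined objective can look like in PyTorch: a standard multi-label classification loss plus a pairwise regularizer that pulls together the representations of positively correlated emotions and pushes apart negatively correlated ones. The correlation matrix, hinge form, and weighting are illustrative assumptions, not the paper's exact formulation.

```python
# Multi-label classification loss with a pairwise regularizer on emotion
# representations. Correlation matrix and weighting are illustrative assumptions.
import torch
import torch.nn.functional as F

def combined_loss(logits, targets, emotion_embeds, corr, lam=0.1):
    """
    logits:         (B, E) per-emotion classification logits
    targets:        (B, E) multi-hot gold labels
    emotion_embeds: (E, H) one representation per emotion class
                    (e.g., contextual embeddings of the emotion words)
    corr:           (E, E) label correlations in [-1, 1], e.g. from co-occurrence
    """
    # Standard multi-label classification term.
    cls_loss = F.binary_cross_entropy_with_logits(logits, targets)

    # Pairwise term: pull embeddings of positively correlated emotions together,
    # push apart (when similar) those of negatively correlated emotions.
    normed = F.normalize(emotion_embeds, dim=-1)
    sim = normed @ normed.t()                                   # (E, E) cosine similarity
    pull = torch.clamp(corr, min=0) * (1 - sim)
    push = torch.clamp(-corr, min=0) * torch.clamp(sim, min=0)
    off_diag = 1 - torch.eye(corr.size(0))
    reg = ((pull + push) * off_diag).sum() / off_diag.sum()

    return cls_loss + lam * reg

# Toy usage with random tensors: 4 examples, 7 emotions, hidden size 16.
logits = torch.randn(4, 7)
targets = torch.randint(0, 2, (4, 7)).float()
emotion_embeds = torch.randn(7, 16, requires_grad=True)
corr = torch.rand(7, 7) * 2 - 1   # placeholder correlation matrix
print(combined_loss(logits, targets, emotion_embeds, corr))
```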

Using Emotion Embeddings to Transfer Knowledge Between Emotions, Languages, and Annotation Formats

To appear in ICASSP, 2023

In this work, we study how we can build a single emotion recognition model that can transition between different configurations, i.e., languages, emotions, and annotation formats, by leveraging multilingual models and Demux.

Recommended citation: Chochlakis, G., Mahajan, G., Baruah, S., Burghardt, K., Lerman, K. and Narayanan, S., 2022. Using Emotion Embeddings to Transfer Knowledge Between Emotions, Languages, and Annotation Formats. arXiv preprint arXiv:2211.00171. https://arxiv.org/abs/2211.00171

Talks

Teaching

Teaching experience 1

Undergraduate course, University 1, Department, 2014

This is a description of a teaching experience. You can use markdown like any other post.

Teaching experience 2

Workshop, University 1, Department, 2015

This is a description of a teaching experience. You can use markdown like any other post.