Nicola Gabriele

PhD student

About Me

I am currently pursuing a PhD in Information and Communication Technologies at the University of Calabria, where my research centers on advanced methodologies in artificial intelligence. I hold a Master's degree in Computer Engineering, with a specialization in Artificial Intelligence and Machine Learning. My Master's thesis investigated the integration of Low-Rank Adaptation (LoRA) techniques with meta-learning frameworks to improve knowledge distillation of large language models (LLMs). The primary objective of this work was to enhance both the computational efficiency and the predictive accuracy of distilled models, enabling more practical and scalable deployment of LLMs in resource-constrained environments. This research reflects my broader interest in optimizing AI systems for real-world applications through adaptive learning strategies.

Interests
  • Artificial Intelligence
  • Natural Language Processing
  • Machine & Deep Learning
  • High Performance Computing
  • Federated Learning
  • Edge-Cloud Computing
  • Knowledge Distillation
  • Meta-Learning
  • Computer Science

Education
  • Visiting PhD Student

    Barcelona Supercomputing Center

  • PhD in Information and Communication Technologies

    University of Calabria

  • Master's Degree in Computer Engineering

    University of Calabria (IT)

  • Bachelor's Degree in Computer Engineering

    University of Calabria (IT)

📚 My Research
My research primarily focuses on the development of machine learning and deep learning algorithms designed to operate efficiently across the edge-cloud continuum. This includes exploring strategies for optimizing performance, resource usage, and latency in distributed and heterogeneous computing environments. Additionally, I am involved in the development of methods for the efficient and environmentally sustainable fine-tuning of Large Language Models (LLMs), with the goal of reducing their computational and energy footprint. I also work on techniques to identify and mitigate biases in LLMs, aiming to improve their fairness, reliability, and ethical alignment.
Recent Publications
(2025). Is Reasoning What You Need to Mitigate Bias? A Study of Adversarial Robustness to Bias Elicitation in Large Reasoning Models. AEQUITAS 2025.
(2025). A Parameter-Efficient Approach to Distilling Large Language Models via Meta-Learning. CAIMA 2025 workshop, part of the ADBIS 2025 conference (September 23-26, 2025, Tampere).
(2025). Federated Learning in the Edge-Cloud Continuum: A Task-Based Approach with Colony. HeteroPar 2025, 23rd International Workshop.