On this page you will find everything you need to know about my research interests and the work I carry out in my daily activities as a researcher at Unibo. I’m always looking for collaborations and open to discussion. Feel free to drop me a line at any time! ;-)

  Research interests   |   Open projects  |  Past projects

Research interests

I have a strong background in Parallel Computing and Data Analytics. My interests have since evolved towards the huge field of Artificial Intelligence, having always been intrigued by the idea of building machines with superhuman abilities. I am now fully engaged in the study of deep learning, continual/lifelong learning, knowledge transfer and distillation, artificial synaptic plasticity, and their applications.

The long-term goal of my research is to answer the following questions:

  • How much can biological learning systems inspire us to build better machines and better learning algorithms? What is the right level of abstraction? Can recent advances in Neuroscience provide useful insights into what intelligence really is, helping us design smarter algorithms accordingly?

  • What does Unsupervised Learning really mean? How important is it for a learning algorithm? Is it the main mechanism our brain uses to solve almost any new problem it encounters?

  • To what extent should incremental and continual learning philosophies be embraced? Are they useful for improving generalization and constructing a sophisticated understanding of the external world?

  • How can current models be scaled and shaped towards a single, flexible, universal learning algorithm? How can we automatically discover classes and adjust the architecture to solve a previously unknown task?

Open projects

I’m currently working on the following projects:

  • Continual Learning Algorithms for Deep Architectures. Learning continually with deep architectures is a hard task. Gradient-based architectures especially suffer from a problem known in the literature as catastrophic forgetting, where previously acquired knowledge and skills are suddenly erased to make room for new ones. The main goal of this project is to develop new continual learning techniques for deep architectures, including sophisticated artificial synaptic plasticity and neural knowledge distillation.

  • New Benchmarks and Evaluation Protocols for Continual Learning. Evaluating continual learning algorithms is still difficult and unclear nowadays, as classical machine learning metrics, protocols and benchmarks are often not enough to compare, evaluate and prototype new continual learning algorithms. The main goal of this project is to create new benchmarks and evaluation schemes that help researchers work better in this area.

  • Continual Deep Learning Applications. The main goal of this project is to transfer fundamental research carried out in the context of continual learning to real-world problems. Interesting application areas include continuous production systems, the Internet of Things, virtual agents, robotics and embedded systems.
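To give a feel for what catastrophic forgetting looks like, here is a minimal, purely illustrative sketch (a hypothetical one-weight regression model, not any of the architectures above): a parameter is trained by gradient descent on task A, then on task B alone, and its performance on task A collapses.

```python
import numpy as np

def train(w, x, y, lr=0.1, steps=200):
    # Full-batch gradient descent on the MSE loss of y_hat = w * x.
    for _ in range(steps):
        grad = 2 * np.mean((w * x - y) * x)  # d/dw of mean((w*x - y)^2)
        w -= lr * grad
    return w

def mse(w, x, y):
    return float(np.mean((w * x - y) ** 2))

x = np.linspace(-1, 1, 50)
y_a = 2.0 * x    # task A: target slope +2
y_b = -2.0 * x   # task B: target slope -2

w = train(0.0, x, y_a)
loss_a_before = mse(w, x, y_a)   # near zero: task A is learned

w = train(w, x, y_b)             # keep training, but on task B only
loss_a_after = mse(w, x, y_a)    # task A performance collapses

print(loss_a_before, loss_a_after)
```

The weight simply drifts to whatever the current loss dictates, overwriting the old solution; continual learning techniques such as synaptic plasticity regularizers or distillation aim to prevent exactly this drift.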
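On the evaluation side, a common starting point in the continual learning literature is an accuracy matrix R, where R[i, j] is the accuracy on task j after training on tasks 0..i; average final accuracy and forgetting can then be read off it. This is a minimal sketch with hypothetical numbers, not results from any of the projects above.

```python
import numpy as np

# R[i, j] = accuracy on task j after training on tasks 0..i
# (hypothetical numbers for a 3-task stream)
R = np.array([
    [0.95, 0.10, 0.12],
    [0.60, 0.93, 0.15],
    [0.40, 0.70, 0.91],
])

T = R.shape[0]

# Average accuracy over all tasks after the final training stage.
avg_accuracy = R[-1].mean()

# Forgetting: for each earlier task, the drop from its best accuracy
# during the stream to its final accuracy, averaged over tasks.
forgetting = float(np.mean([R[:-1, j].max() - R[-1, j] for j in range(T - 1)]))

print(avg_accuracy, forgetting)
```

Richer protocols (e.g. accounting for memory and compute budgets, or temporally coherent streams) are exactly what this project tries to standardize.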

Past projects

In the past I’ve worked on the following projects:

  • Smartphone-sensor-based methods for transportation mode detection. In this more applied project, the main goal was to detect users' mode of transportation based only on (low-battery-consumption) sensors embedded in their mobile phones. The streaming nature of the training data makes this task a natural playground to test our Continual/Lifelong Machine Learning and Deep Learning algorithms.

  • Evaluating and comparing HTMs and CNNs in continual/lifelong learning scenarios. In this project we evaluated and compared HTMs and CNNs in continual/lifelong learning scenarios. This is intriguing due to the very nature of the task, which has biological plausibility. We primarily focused on Computer Vision tasks involving temporally coherent data streams.

  • Application of AI and Continual Learning to Software Engineering. Software Engineering is not one of the first fields that comes to mind when thinking about the application of AI algorithms. Yet many recent works have shown that, for specific tasks such as user-story disambiguation, bug severity prediction, automatic bug repair, etc., ML systems can be very useful. In this project the idea was to propose a novel, flexible and extensible AI framework for Agile-based continuous development projects that can learn and improve after deployment, adapting and refining its prediction capabilities based on previous development cycles.

  • Scaling up the HTM algorithm. This was a project specifically related to the HTM algorithm. Unlike CNNs, the HTM algorithm is still in its infancy. The main effort focused on scaling up both the algorithm and its implementation to work with images larger than 64x64 pixels and with many input channels.

  • Exporting our computer vision experiments to the Nao Robot platform. Another important project we worked on (also for teaching purposes) concerned the validation of our algorithms in a robotic context. Since we are mainly interested in biologically inspired deep learning methods, this was a natural validation step for our research.

More information about my past projects as a graduate and PhD student is available on my LinkedIn page!