cfaed Publications

Automatic optimization for heterogeneous in-memory computing

Reference

Jeronimo Castrillon, "Automatic optimization for heterogeneous in-memory computing", In Focus Session, Design, Automation and Test in Europe Conference (DATE) (invited talk), Mar 2024.

Abstract

Fuelled by exciting advances in materials and devices, in-memory computing architectures now represent a promising avenue for advancing computing systems. Many manual designs have already demonstrated orders-of-magnitude improvements in compute efficiency over classical von Neumann machines across different application domains. In this talk we discuss automation flows for programming and for exploring the parameter space of in-memory architectures. We report on current efforts to build an extensible framework around the MLIR compiler infrastructure that abstracts from individual technologies to foster re-use. Concretely, we present optimising flows for in-memory accelerators based on crossbars, content-addressable memories, and bulk bit-wise logic operations. We believe this kind of automation is key to navigating the heterogeneous landscape of in-memory accelerators more quickly and to bringing the benefits of emerging architectures to a broader range of applications.

Bibtex

@Misc{castrillon_date2024,
author = {Castrillon, Jeronimo},
title = {Automatic optimization for heterogeneous in-memory computing},
howpublished = {Focus Session, Design, Automation and Test in Europe Conference (DATE) (invited talk)},
location = {Valencia, Spain},
abstract = {Fuelled by exciting advances in materials and devices, in-memory computing architectures now represent a promising avenue for advancing computing systems. Many manual designs have already demonstrated orders-of-magnitude improvements in compute efficiency over classical von Neumann machines across different application domains. In this talk we discuss automation flows for programming and for exploring the parameter space of in-memory architectures. We report on current efforts to build an extensible framework around the MLIR compiler infrastructure that abstracts from individual technologies to foster re-use. Concretely, we present optimising flows for in-memory accelerators based on crossbars, content-addressable memories, and bulk bit-wise logic operations. We believe this kind of automation is key to navigating the heterogeneous landscape of in-memory accelerators more quickly and to bringing the benefits of emerging architectures to a broader range of applications.},
month = mar,
year = {2024},
}


Permalink

https://esim-project.eu/publications?pubId=3735
