AI4DEV 2023: First Workshop on AI Assisted Software Development for HPC

November 13th, 2023 (half day, 9:00am - 12:30pm MST)

Colorado Convention Center

Denver, Colorado, USA

Held in conjunction with SC23: The International Conference for High Performance Computing, Networking, Storage and Analysis

In cooperation with IEEE CS


While scientific software is an important component in the pursuit of scientific discovery, software development in HPC continues to be challenging. The software development process today combines contributions from domain scientists, applied mathematicians, and computer scientists, and it involves complex programming models. As a result of these diverse contributions, software environments have become significantly more complicated.

With this increasing diversity, software development becomes more complex and imposes a steep learning curve on new developers, slowing the pace of development. As scientific applications are continuously integrated into complex, deep software stacks (workflows, compilers, runtime libraries, heterogeneous systems), novel techniques and practical tools that assist software development in HPC are invaluable. Recent advances in generative AI and large language models, such as GitHub’s Copilot and OpenAI’s GPT, demonstrate promising potential for developer assistance and automated code synthesis.

The goal of the AI Assisted Software Development for HPC (AI4DEV) workshop is to create a forum of researchers, scientists, application developers, computing center staff, and industry staff to discuss how artificial intelligence can help throughout the software development process. The workshop will feature contributed papers and invited talks in this area.

Workshop Topics

Topics of interest include, but are not limited to:

  • Machine learning techniques to improve programming productivity
  • Performance analysis driven by AI and ML
  • Debugging and testing driven by ML/AI
  • ML-assisted compiler optimizations and code generation
  • Auto-tuning and performance portability using ML/AI
  • Code synthesis and generation using automated ML techniques
  • AI-assisted code recommendations for code maintainability, performance, and correctness
  • IDE extensions using ML for improved programming productivity
  • AI-assisted software building and deployment
  • Mining best programming practices using ML

Submissions and Format

Authors are invited to submit full papers or short papers in English, structured as technical or experience papers. Full papers should be at least 6 pages and must not exceed 8 pages of content, including everything except references. Short papers should be at least 3 pages and must not exceed 4 pages, including everything (references, figures, etc.). Submissions must use the ACM format. LaTeX users should use the latest template, version 1.90 (last updated April 4, 2023), with the “sigconf” option.

Submitted papers will be peer-reviewed by the Program Committee, and accepted papers will be published in IEEE Xplore.

Submitted papers must represent original, unpublished research that is not currently under review for any other venue. Papers not following these guidelines will be rejected without review. Submissions received after the due date, exceeding the length limit, or not appropriately structured may also not be considered. At least one author of an accepted paper must register for and attend the workshop. Authors may contact the workshop organizers for more information. Papers should be submitted electronically at: https://submissions.supercomputing.org/.

SC Reproducibility Initiative

We encourage authors to submit an optional artifact description (AD) appendix along with their paper, describing their software environments and computational experiments in enough detail that an independent person could replicate the results. The AD appendix is not included in the paper’s 8-page limit and should not exceed 2 pages. For more details on the SC Reproducibility Initiative, please see: https://sc23.supercomputing.org/program/papers/reproducibility-initiative/.

Proceedings

The proceedings will be archived in IEEE Xplore.

Important Dates

  • Paper submissions due: August 7, 2023 (extended to August 13, 2023)
  • Notification of acceptance: September 4, 2023 (extended to September 8, 2023)
  • Camera-ready papers due: September 27, 2023

All deadlines are Anywhere on Earth (AoE).

Workshop Date

  • Half-day Workshop
  • November 13th, 2023, 9:00am - 12:30pm MST

Organizers

Giorgis Georgakoudis, Lawrence Livermore National Laboratory
Ignacio Laguna, Lawrence Livermore National Laboratory
Konstantinos Parasyris, Lawrence Livermore National Laboratory

Program Committee

  • Jan Hückelheim, Argonne National Laboratory
  • Nikhil Jain, NVIDIA
  • Tarindu Jayatilaka, Purdue University
  • Dong Li, University of California, Merced
  • Chunhua Liao, Lawrence Livermore National Laboratory
  • Harshitha Menon, Lawrence Livermore National Laboratory
  • William S. Moses, Massachusetts Institute of Technology
  • Boyana Norris, University of Oregon
  • EunJung (EJ) Park, Qualcomm Inc.
  • Pavlos Petoumenos, University of Manchester
  • Kento Sato, RIKEN
  • Keren Zhou, OpenAI

Venue

  • Colorado Convention Center, Denver, CO, USA
  • Room: 601

Program

Opening and Invited Talk (Chair: Ignacio Laguna)

9:00am - 9:10am: Opening remarks
9:10am - 10:00am: Invited Talk: AI-driven Performance Metaprogramming, Torsten Hoefler

Break

10:00am - 10:30am: Break

Papers Session (Chair: Konstantinos Parasyris)

10:30am - 10:50am: Paper 1: MPI-RICAL: Data-Driven MPI Distributed Parallelism Assistance with Transformers. Nadav Schneider, Tal Kadosh, Timothy Mattson, Yuval Pinter, Gal Oren
10:50am - 11:10am: Paper 2: VSCuda: LLM based CUDA extension for Visual Studio Code. Brian Chen, Nafis Mustakin, Alvin Hoang, Sakib Fuad, Daniel Wong

Invited Talks (Chair: Giorgis Georgakoudis)

11:10am - 11:50am: Invited Talk: LLVM in the age of LLMs: Machine Learning for IR and optimization and more, William S. Moses
11:50am - 12:30pm: Invited Talk: Unlocking the Potential of Large Language Models for High-Performance Computing Code, Gal Oren

Contact Information

Please address workshop questions to Giorgis Georgakoudis (georgakoudis1@llnl.gov), Ignacio Laguna (ilaguna@llnl.gov), and/or Konstantinos Parasyris (parasyris1@llnl.gov).