Call for Papers (Completed)

Paper submission by 30 November 2022 (Closed)

Notification of acceptance of papers by 23 December 2022 (Closed)

Camera-ready papers by 16 January 2023 (Closed)


CoNGA'23 invites original contributions on topics related to next generation arithmetic, specifically universal number (unum) and posit arithmetic, including, but not limited to:


  • Posit & Valid Arithmetic
  • Artificial Intelligence Data Formats
  • Energy Efficient Arithmetic
  • Right Sizing Precision

Authors are invited to submit manuscripts of original, unpublished research and recent developments in next generation arithmetic (NGA).
Submitted manuscripts can be up to 16 pages, must be written in English, and must be formatted according to Springer's LNCS templates without changing the default margins, fonts, etc.
Submissions will be accepted electronically via EasyChair.
Accepted manuscripts will be published as part of a Lecture Notes in Computer Science (LNCS) revised post-proceedings volume. The CoNGA'22 volume can be found here.



Authors should use the proceedings templates, either for LaTeX or for Word, for the preparation of their papers. Supplementary materials that facilitate verification of the results, e.g., source code, proof details, etc., may be appended without a page limit or uploaded as separate files, but reviewers are not required to read them, and they will not be printed in the proceedings. Submissions must therefore be complete, intelligible, and self-contained within the 16-page limit. Papers should have page numbers to facilitate their review; in LaTeX, this can be achieved, for instance, using \pagestyle{plain}. Each submission must be a single PDF file.

Springer encourages authors to include their ORCIDs in their papers. Springer also offers a feature for the inclusion of embedded videos in proceedings papers.


In addition, the corresponding author of each paper, acting on behalf of all of the authors of that paper, must complete and sign a Consent-to-Publish form. The corresponding author signing the copyright form should match the corresponding author marked on the paper. Once the files have been sent to Springer, changes relating to the authorship of the papers cannot be made. Please refer to the Information for Authors of Springer Computer Science Proceedings for additional information.

Papers should present original research and should provide sufficient background material to make them accessible to the broader community. They must not be submitted in parallel to any other conference or journal. All manuscripts will be reviewed and judged on correctness, originality, technical strength, significance, quality of presentation, and interest and relevance to the conference. At least one author of each accepted paper should be available to present the paper at CoNGA'23 (1–2 March 2023, Singapore), preferably in person.


Kindly contact us at contact@posithub.org for any clarifications.

About CoNGA'23

The Conference on Next Generation Arithmetic (CoNGA) is the leading conference on emerging technologies for computer arithmetic. The demands of both AI and HPC have led the community to realize that something better than traditional floating-point arithmetic is needed to reach the speed, accuracy, and energy efficiency required by today's most challenging workloads. In particular, posit arithmetic is achieving rapid adoption as a non-proprietary format, but CoNGA welcomes papers about any arithmetic format that breaks from the past and shows merit and promise.

CoNGA will be held in conjunction with Supercomputing Asia 2023 (SCA23), an annual international conference that encompasses an umbrella of notable supercomputing events with the key objective of promoting a vibrant and relevant HPC ecosystem in Asia. Co-organised by HPC centres from Australia, Japan, Thailand and Singapore, SCA23 will be held from 28 February to 2 March 2023 at the Singapore Expo Convention & Exhibition Centre. The HPC Asia 2023 conference will also be co-located with SCA23 in Singapore for the first time.

Chairs

General Chair

John Gustafson
Arizona State University


Marek Michalewicz
National Supercomputing Centre, Singapore
Sano Centre for Computational Personalised Medicine

Program Chair

Cerlane Leong
Swiss National Supercomputing Centre, ETH Zurich

Program Committee

Andrew Shewmaker
OpenEye Scientific

Chung Shin Yee
National Supercomputing Centre, Singapore

Himeshi De Silva
A*STAR

Marco Cococcioni
University of Pisa

Peter Lindstrom
Lawrence Livermore National Laboratory

Roman Iakymchuk
Umeå Universitet

Vassil Dimitry
Lemurian Labs

Local Organisers

Eugene Low
National Supercomputing Centre, Singapore

Singapore Expo Convention & Exhibition Centre

Singapore

1-2 March, 2023

11:15 AM – 6:00 PM

Networking Event

To be confirmed

Includes Lunch & Breaks

Don’t miss it

Event Schedule

Please note that the dates for CoNGA have changed from 27-28 February to 1-2 March. All CoNGA-related talks will begin only after lunch so that CoNGA participants have a chance to attend the keynotes and plenary sessions of the co-located SCA23 and HPC Asia 2023 events. For participants who signed up for the entire SCA conference (27 Feb-2 Mar), please note that the networking event is on 28 Feb. Please use the discount code 'NSCC5' for standard registration to get a $150 reduction.
For authors of accepted papers, please register as authors and the $150 reduction will be applied automatically.

Click on a talk you are interested in to view its abstract and slides.

Location: TBD

John will kickstart the conference by giving us highlights of CoNGA'23.

[Slides]

Abstract of Keynote:

[Slides]

Abstract:
The Fast Fourier Transform (FFT) is required for chemistry, weather, defense, and signal processing for seismic exploration and radio astronomy. It is communication-bound, making supercomputers thousands of times slower at FFTs than at dense linear algebra. The key to accelerating FFTs is to minimize bits per datum without sacrificing accuracy. The 16-bit fixed-point and IEEE float types lack sufficient accuracy for 1024- and 4096-point FFTs of data from analog-to-digital converters. We show that the 16-bit posit, with higher accuracy and larger dynamic range, can perform FFTs so accurately that a forward-inverse FFT restores the original signal perfectly. “Reversible” FFTs with posits are lossless, eliminating the need for 32-bit or higher precision. Similarly, 32-bit posit FFTs can replace 64-bit float FFTs for many HPC tasks. Speed, energy efficiency, and storage costs can thus be improved by 2× for a broad range of HPC workloads.

[Slides]

Enjoy the break!

Abstract:
This paper presents an optimization method to build the smallest possible integer mapping unit that can replace a conventional multiply-and-accumulate unit in deep learning applications. The unit is built using a hardware-software co-design strategy that minimizes the set of represented real values and energy consumed. We target larger and more complex deep learning application domains than those explored in previous related works, namely generative models for image and text content. Our key result is that using our proposed method, we can produce a set as small as 4 entries for an image enhancement application, and 16-32 entries for the GPT2 model, all with minimal loss of quality. Experimental results show that a hardware accelerator designed using our approach can reduce the processing time up to 1.98x/3.62x and reduce computation energy consumed up to 1.7x/8.4x compared to 8-bit integer/16-bit floating-point alternatives, respectively.

[Slides]

Abstract:
Interval arithmetic is a well-known method for obtaining exact bounds on computational results even with inexact input data and numerical error introduced by floating-point numerics. The posit format, which aims to surpass the precision efficiency of the conventional IEEE 754 floating-point format, is accompanied by valids, an adaptation and generalisation of interval arithmetic. A given calculation can be performed either with posits or with valids, yielding either an approximate result with high computational efficiency or rigorous lower and upper bounds on the result. However, Boolean relational predicates such as a < b are ambiguous when applied to possibly overlapping intervals, leading to logical inconsistency no matter how the ambiguity is resolved. As a consequence, a numerical routine written for scalars which has data-dependent branches can return incorrect results when applied to intervals or valids. This paper proposes to define relational predicates for interval types as set-valued predicates instead of Boolean predicates. The proposed relational predicates are logically consistent and have intuitive projections to the two-element Boolean algebra. Using these predicates, we can express a calculation with data-dependent branches such that it can operate either on scalars or on intervals, while easily constraining interval-valued comparands by the branch condition. With such interval-aware code we obtain either an approximate result or good interval bounds. We have developed a C++ library package which implements the proposed concepts for traditional interval arithmetic, and we adapted it to a posit and valid implementation, demonstrating the viability of the concept with both traditional and more recent interval formats.
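As a rough illustration of the set-valued comparison idea described in the abstract (this is a minimal sketch, not the authors' C++ package; the types and names are invented for illustration), comparing two closed intervals can return the set of Boolean outcomes that are possible for points drawn from them:

```cpp
// Illustrative sketch of a set-valued "<" for closed intervals.
// The result records which truth values are possible when comparing
// any a in [a.lo, a.hi] with any b in [b.lo, b.hi].
#include <iostream>

struct Interval { double lo, hi; };

// A small truth "set" over {false, true}.
struct BoolSet {
    bool can_be_true  = false;
    bool can_be_false = false;
};

// Set-valued less-than: collects every outcome of a < b that is
// possible for points drawn from the two intervals.
BoolSet less_than(const Interval& a, const Interval& b) {
    BoolSet r;
    r.can_be_true  = a.lo < b.hi;   // some a can lie below some b
    r.can_be_false = a.hi >= b.lo;  // some a can lie at or above some b
    return r;
}

int main() {
    Interval x{1.0, 3.0}, y{2.0, 5.0};   // overlapping intervals
    BoolSet c = less_than(x, y);
    std::cout << std::boolalpha
              << "x < y can be true: "  << c.can_be_true
              << ", can be false: "     << c.can_be_false << '\n';
    return 0;
}
```

When the intervals overlap, both outcomes are possible, which is exactly the situation where a scalar-style Boolean branch would silently pick one path and could return an incorrect interval result.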

[Slides]

Abstract:
In this work, we present a novel method for directly computing functions of two real numbers using logic circuits without decoding; the real numbers are mapped to a particularly chosen set of integers. We theoretically prove that this mapping always exists and that we can implement any kind of binary operation between real numbers regardless of the encoding format. While the real numbers in the set can be arbitrary (rational, irrational, transcendental), we find practical applications in ultra-fast, low-power posit number arithmetic. Finally, we provide examples of decoding-free 4-bit posit arithmetic.
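To make the mapping idea concrete, here is a minimal software sketch (not the paper's optimized method; the tiny value set and rounding rule are hypothetical) in which each code stands for a real value and multiplication is a single table lookup over codes, with no decode or encode step:

```cpp
// Illustrative sketch of table-driven, "decoding-free" arithmetic:
// each 2-bit code stands for a real value, and multiplication is a
// single lookup from a pair of codes to a code.
#include <array>
#include <cmath>
#include <cstdint>
#include <iostream>

constexpr std::array<double, 4> kValues = {0.0, 0.5, 1.0, 2.0};  // hypothetical value set

// Round a real result back onto the nearest representable value's code.
uint8_t encode_nearest(double x) {
    uint8_t best = 0;
    for (uint8_t i = 1; i < kValues.size(); ++i)
        if (std::fabs(kValues[i] - x) < std::fabs(kValues[best] - x)) best = i;
    return best;
}

// Build the 4x4 multiplication table once; in hardware this would be
// combinational logic over the codes.
std::array<std::array<uint8_t, 4>, 4> build_mul_table() {
    std::array<std::array<uint8_t, 4>, 4> t{};
    for (uint8_t a = 0; a < 4; ++a)
        for (uint8_t b = 0; b < 4; ++b)
            t[a][b] = encode_nearest(kValues[a] * kValues[b]);
    return t;
}

int main() {
    const auto mul = build_mul_table();
    uint8_t a = 3, b = 1;                      // codes for 2.0 and 0.5
    uint8_t c = mul[a][b];                     // one lookup, no decoding
    std::cout << kValues[a] << " * " << kValues[b]
              << " ~= " << kValues[c] << '\n'; // prints 2 * 0.5 ~= 1
    return 0;
}
```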

[Slides] [Video]

Abstract:
This paper presents a new approach to support vector filtering to accelerate the training process of support vector machines (SVMs). It is based on the Sets-of-Real-Numbers (SORN) number format, which provides low-complexity and ultra-fast computation. SORNs are an interval-based binary number format, showing promising results for complex arithmetic operations, e.g. multiplication or fused multiply-add. To apply SORNs to high-dimensional vector arithmetic, a combination of SORN arithmetic and fixed-point adder trees is used. This Hybrid SORN approach combines the advantages of SORNs, concerning reduction of computational costs and time, and fixed-point adders in terms of precision. A Hybrid SORN support vector filtering architecture is implemented on an FPGA board with a Zynq 7000 XC7Z100 SoC and evaluated on the MNIST dataset. It can be considered a hardware accelerator, reducing the training time by a factor of 1.38 for one-versus-rest and 2.65 for one-versus-one SVM implementations.

[Slides]

Location: TBD

Abstract:
My career spans the period from well before there was any kind of standard for floating-point arithmetic to the present era, in which hardware support for the full IEEE 754 Standard is almost extinct and we are returning to the state where floats behave differently on different processors… in other words, IEEE 754 represents a guideline but not a standard. Over the last four decades, I have accumulated quite a bit of surprising information about how the IEEE 754 Standard came to be and what is now causing its decline, through direct conversations with the decision makers. That information is not published anywhere. This talk will explain the little-known reasoning behind so many of the choices that went into defining IEEE 754, dispel widely-held myths about it, and show how we can learn to avoid its mistakes in creating a new generation of formats for representing real numbers.

[Slides]

Abstract:
Arithmetic accelerators are always in demand for fast computations and logic operations. Here, posit arithmetic plays an important role; it outperforms the traditional IEEE-754 floating-point format in terms of accuracy and dynamic range. This paper proposes efficient sequential architectures for a posit adder/subtractor and multiplier that work according to the desired operand bit size. A 32-bit architecture with different exponent sizes (ES) has been designed with a control unit. FPGA implementations of these architectures have been carried out on the Xilinx Virtex-7 xc7vx330t-3ffg1157 and Zynq UltraScale+ MPSoC ZCU102 devices. In comparison with existing work, it is observed that the datapath delay is lowered by 64.64% for the 32-bit adder and 52.66% for the 32-bit multiplier on the Virtex-7 FPGA device. In addition, the area-delay (AD) product is reduced by 52.69% and 69.30% for the 32-bit posit adder and multiplier, respectively. The proposed design also has lower dynamic power than existing architectures.

[Slides]

Abstract:
The article focuses on the Sets-of-Real-Numbers (SORN) format for digital arithmetic and signal processing. The format derives from universal numbers (unums) and has already proven to be a valuable alternative to legacy formats like fixed point or floating point. The main challenge of SORN arithmetic is degenerating accuracy due to increasing interval widths. The authors consider three fused SORN arithmetic operations (addition, multiplication, and multiply-add) and propose an approach to reduce interval growth and improve accuracy. The evaluation of accuracy and hardware performance shows that accuracy can be improved by up to 60%, while hardware complexity increases moderately to significantly.

[Slides]

Enjoy the break!

Abstract:
Error resilience in neural networks has allowed for the adoption of low-precision floating-point (float) representations for mixed-precision training to improve efficiency. 16-bit float representations were the first to be experimented with for this purpose. Although the IEEE 754 standard has long defined a 16-bit float representation, several other alternatives targeting mixed-precision training have also emerged. However, their varying numerical properties and differing hardware characteristics, among other things, make them more or less suitable for the task. Therefore, there is no commonly accepted choice of a 16-bit floating-point representation for neural network training. In this work, we evaluate all proposed 16-bit float variants and upcoming posit number representations on a set of Convolutional Neural Networks (CNNs) and other benchmarks to compare their suitability for neural network training. Posits generally achieve better results, indicating that their non-uniform accuracy distribution is more conducive to the training task. Our analysis suggests that instead of having the same accuracy for all weight values, as is the case with floats, having more accuracy for the weights with larger magnitude and higher frequency improves the training accuracy, thereby challenging previously held assumptions while bringing new insight into the dynamic range and precision requirements. We also evaluate their efficiency on hardware for mixed-precision training based on FPGA implementations. Finally, we propose the use of the distribution of network weight values as a heuristic for selecting the number representation to be used.
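For background on why the 16-bit candidates differ, the following sketch (added here for context, not taken from the paper) computes the headline properties of the two IEEE-style layouts from their exponent and fraction widths; posits are omitted because their accuracy tapers with magnitude rather than being a single constant:

```cpp
// Back-of-the-envelope comparison of two 16-bit IEEE-style layouts:
// float16 (5 exponent / 10 fraction bits) and bfloat16 (8 / 7).
#include <cmath>
#include <cstdio>

struct IeeeLike { const char* name; int exp_bits; int frac_bits; };

void describe(const IeeeLike& f) {
    const int bias = (1 << (f.exp_bits - 1)) - 1;
    const double max_normal = (2.0 - std::ldexp(1.0, -f.frac_bits)) *
                              std::ldexp(1.0, bias);        // (2 - 2^-m) * 2^bias
    const double min_normal = std::ldexp(1.0, 1 - bias);     // 2^(1 - bias)
    const double epsilon    = std::ldexp(1.0, -f.frac_bits); // 2^-m
    std::printf("%-8s max %.4g  min-normal %.4g  epsilon %.4g\n",
                f.name, max_normal, min_normal, epsilon);
}

int main() {
    describe({"float16", 5, 10});   // max ~6.55e4, finer epsilon
    describe({"bfloat16", 8, 7});   // max ~3.39e38, coarser epsilon
    return 0;
}
```

The trade-off is visible directly: bfloat16 buys dynamic range at the cost of precision, while float16 does the opposite, which is one reason no single 16-bit float has become the default for training.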

[Slides]

Abstract:
Various Kalman filter mechanizations are proposed in the literature to minimize numerical instabilities due to floating-point rounding and precision in computers. Theoretically, all these mechanizations should lead to identical results, but in practice numerical differences arise due to the floating-point representation used to perform the computations. The IEEE 754 floating-point format is the most widely used representation in computers today, which is why we focus on the numerical issues arising in Kalman filters due to this format. These numerical issues can cause the filter to diverge (for example, the positive definiteness of the covariance matrix is lost), which is undesirable, especially in real-time, mission-critical hardware. In this work, we study the applicability of posits, an alternative floating-point representation with the same bit length as IEEE 754 floats, to improve the stability of Kalman filters. We show the benefits of using posits over IEEE floats and support the claim with a few case studies.

[Slides]

Abstract:
Conventionally, the front-end Digital Signal Processing (DSP) for applications in radio astronomy has employed low-precision fixed-point arithmetic. However, the next-generation large-scale projects for radio astronomy such as the Square Kilometre Array (SKA), the Atacama Large Millimeter/sub-millimeter Array (ALMA) upgrade and the proposed next-generation Very Large Array (ngVLA) have ambitious science goals that require higher sensitivities, which in turn require high-precision arithmetic implementations. The increasing strength, bandwidth and number of sources of Radio Frequency Interference (RFI) also exacerbate the need for high-precision arithmetic. These factors lead to higher cost and power and longer design cycles for the DSP systems in radio astronomy. Meanwhile, hardware manufacturers are offering native support for low-precision floating-point number formats such as Float16 and BFloat16 and variants of those. In addition, ‘Posits’, a new floating-point representation introduced by John Gustafson, claims to offer better accuracy than Float16 under certain conditions. With these compact data formats, signal processing systems are expected to consume less power and fewer resources. For typical radio astronomical observations, the achievable sensitivity is determined by the ability to suppress RFI and the accuracy of delay correction. In the following, these two aspects are studied for the front-end DSP modules of the SKA correlator and beamformer, where the coefficients are represented with Float16, BFloat16, variants of those formats and 16-bit Posits, and compared against the current fixed-point representation.

[Slides]

Abstract:
The posit numeric format has attracted increasing attention in recent years. Its tapered precision makes it especially suitable for many applications, including machine learning computation. However, due to its dynamic component bit-widths, the cost of implementing posit arithmetic in hardware is higher than that of its floating-point counterpart. To address this cost, this paper proposes approximate logarithmic designs for posit multiplication, division, and square root. It is found that approximate logarithmic units are well suited to applications that tolerate large errors, such as machine learning algorithms, while requiring less power.
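The general principle behind logarithmic approximate units is Mitchell-style approximation, log2(1+f) ≈ f, which turns multiplication into addition of approximate logarithms. The sketch below shows that principle on ordinary positive doubles; it is an illustration of the generic technique, not the posit hardware design proposed in the paper:

```cpp
// Mitchell-style logarithmic approximate multiplication on positive doubles.
#include <cmath>
#include <cstdio>

// Approximate log2 via Mitchell: for x = 2^k * (1 + f), use k + f.
double approx_log2(double x) {
    int k;
    double m = std::frexp(x, &k);     // x = m * 2^k, m in [0.5, 1)
    return (k - 1) + (2.0 * m - 1.0); // rewrite as 2^(k-1) * (1 + f), f = 2m - 1
}

// Approximate 2^y: split y into integer part k and fraction f, use 2^k * (1 + f).
double approx_exp2(double y) {
    double k = std::floor(y);
    double f = y - k;
    return std::ldexp(1.0 + f, static_cast<int>(k));
}

// Multiply by adding approximate logarithms (inputs assumed positive).
double approx_mul(double a, double b) {
    return approx_exp2(approx_log2(a) + approx_log2(b));
}

int main() {
    double a = 3.7, b = 12.5;
    double approx = approx_mul(a, b);
    double exact  = a * b;
    std::printf("approx %.4f  exact %.4f  rel.err %.2f%%\n",
                approx, exact, 100.0 * std::fabs(approx - exact) / exact);
    return 0;
}
```

For these inputs the relative error is a couple of percent, which is why such units are aimed at error-tolerant workloads like machine learning rather than at exact computation.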

[Slides]

Speakers

CoNGA'23

Keynote

Benoît Dinechin
Kalray and European Processor Initiative

Keynote

John Gustafson
Arizona State University

Ask a Question

Conference for Next Generation Arithmetic

What are the COVID-19 guidelines in Singapore?
Please note that the guidelines are still evolving over time. Please make sure that you stay up-to-date before travelling. The current guidelines can be found here: https://www.ica.gov.sg/enter-transit-depart/entering-singapore
How to contact the conference chairs and/or local organisers?
The conference chairs can be reached via contact@posithub.org for program related matters.

For all other questions and clarifications, please contact the local organisers via papers@sc-asia.org and please indicate ‘<CoNGA23>’ in the subject header of your email.

Sponsor & Co-located Conferences

CoNGA'23

Registration

For participants who signed up for the entire SCA conference (27 Feb-2 Mar), please note that the networking event is on 28 Feb. Please use the discount code 'NSCC5' for standard registration to get a $150 reduction.
For authors of accepted papers, please register as authors and the $150 reduction will be applied automatically.

Fee and Registration Page

Get Inspired

Get inspired by novel advances in arithmetic!

Networking Session

Love to challenge the norm? Meet like-minded individuals!

Global Event

Join researchers and technologists around the world!

Venue