
Call for Papers


Abstract submission by 1 December 2023

Paper submission by 5 January 2024
Extended! Paper submission by 20 January 2024
Notification of acceptance of papers by 12 February 2024

Camera-ready papers by 31 March 2024
CoNGA'24 papers will be published by Springer soon!

CoNGA'24 invites original contributions on topics related to next generation arithmetic, specifically universal number (unum) and posit arithmetic, including, but not limited to:

  • Posit & Valid Arithmetic
  • Artificial Intelligence Data Formats
  • Energy Efficient Arithmetic
  • Right Sizing Precision

Authors are invited to submit manuscripts of original, unpublished research and recent developments in next generation arithmetic (NGA).
Submitted manuscripts may be up to 16 pages, written in English, and formatted according to Springer's LNCS templates without changing the default margins, fonts, etc.
Submissions will be accepted electronically via EasyChair.
Accepted manuscripts will be published as part of a Lecture Notes in Computer Science (LNCS) revised post-proceedings volume. The CoNGA'23 volume can be found here.

Authors should use the proceedings templates, either for LaTeX or for Word, to prepare their papers. Supplementary materials that facilitate verification of the results, e.g., source code or proof details, may be appended without a page limit or uploaded as separate files, but reviewers are not required to read them, nor will they be printed in the proceedings. Hence, submissions must be complete, intelligible, and self-contained within the 16-page limit. Papers should have page numbers to facilitate their review; in LaTeX, this can be achieved using, for instance, \pagestyle{plain}. Each submission must be a single PDF file. Springer encourages authors to include their ORCIDs in their papers. Springer also offers a feature for the inclusion of embedded videos in proceedings papers.

In addition, the corresponding author of each paper, acting on behalf of all of its authors, must complete and sign a Consent-to-Publish form. The corresponding author signing the copyright form should match the corresponding author marked on the paper. Once the files have been sent to Springer, changes relating to the authorship of the papers cannot be made. Please refer to the Information for Authors of Springer Computer Science Proceedings for additional details. Papers should present original research and provide sufficient background material to make them accessible to the broader community. A paper must not be submitted in parallel to any other conference or journal. All manuscripts will be reviewed and judged on correctness, originality, technical strength, significance, quality of presentation, and interest and relevance to the conference. At least one author of each accepted paper should be available to present it at CoNGA'24.

For more details on CoNGA'24, please visit the EasyChair page for CoNGA 2024. Kindly contact us at contact@posithub.org for any clarifications.

About CoNGA'24

The Conference on Next Generation Arithmetic (CoNGA) is the leading conference on emerging technologies for computer arithmetic. The demands of both AI and HPC have led the community to realize that something better than traditional floating-point arithmetic is needed to reach the speed, accuracy, and energy efficiency that today's most challenging workloads demand. In particular, posit arithmetic is seeing rapid adoption as a non-proprietary format, but CoNGA welcomes papers about any arithmetic format that breaks from the past and shows merit and promise.


General Chair

John Gustafson
Arizona State University

Marek Michalewicz
National Supercomputing Centre, Singapore

Program Chair

Himeshi De Silva
A*STAR, Singapore

Program Committee

Andrew Shewmaker
OpenEye Scientific

Cerlane Leong
Swiss National Supercomputing Centre

Chung Shin Yee
National Supercomputing Centre, Singapore

Glenn Matlin
Georgia Institute of Technology

Himeshi De Silva

John Gustafson
Arizona State University

Marek Michalewicz
National Supercomputing Centre, Singapore

Marco Cococcioni
University of Pisa

Peter Lindstrom
Lawrence Livermore National Laboratory

Roman Iakymchuk
Umeå Universitet

Akshat Ramachandran
Georgia Institute of Technology

Hauke Rehr
Friedrich Schiller University

Laslo Hunhold
University of Cologne

Vassil Dimitrov
Lemurian Labs

Raul Murillo
Complutense University of Madrid

Guillermo Botella
Universidad Complutense de Madrid

Local Organisers

To be confirmed

Virtual Event

20 February 2024

Event Schedule

Click on a talk you are interested in to view its abstract and slides.

Timezones shown are Mountain Standard Time (MST), Central European Time (CET), India Standard Time (IST) and Australian Eastern Daylight Time (AEDT).

John will kick off the conference by giving us the highlights of CoNGA'24.

In the dynamic realm of artificial intelligence, the quest for more efficient and precise deep learning models stands as an enduring challenge. Low-Bit Encoding presents a transformative approach to neural network representation by reducing their precision to lower-bit formats, thereby substantially curtailing memory usage and computational demands.
Vision Transformers (ViTs) have garnered substantial attention at the intersection of Transformers and Computer Vision. However, ViTs are computationally expensive when dealing with large-scale models and datasets, and encoding them with low-bit arithmetic offers significant advantages in terms of computational efficiency and resource requirements.
This paper introduces a methodology for low-bit encoding of Vision Transformers, employing Posit Arithmetic, a novel numerical system acclaimed for its precision, reduced execution time, storage efficiency, and diminished power consumption in comparison to traditional floating-point numbers. Vision Transformer (ViT) serves as a specific use case in this study to empirically evaluate the efficiency and viability of our proposed low-bit encoding scheme.
We conducted a comparative study involving low-precision Posit-empowered Stacked Vision Transformer (Stacked ViT), Stacked Convolutional Neural Networks (Stacked CNNs), and conventional ViT. Our findings reveal that the Stacked Vision Transformer competes effectively with the Stacked Convolutional Neural Network baseline. This underscores the potential of low-bit encoding using Posit as a compelling solution for AI applications that must adhere to stringent computational constraints.
Through meticulous experimentation and comprehensive evaluation, this paper unveils the remarkable advantages of low-bit encoding of ViT, offering new insights into the future of efficient and accurate deep learning models.

Simulations provide a powerful means to explore, analyze, and understand complex systems, allowing us to make informed decisions across various domains. Fields like genomics, physics, and climate science produce massive amounts of processed data, requiring a robust storage infrastructure beyond traditional systems. The Memory Wall, the growing gap between computation speed and data access time, creates a bottleneck. While innovative solutions exist to counter this issue, they often require infrastructure changes or apply only to specific algorithms. Reducing memory size alone sacrifices precision, making it inadequate, and custom datatypes tailored to specific applications face compatibility issues with legacy software and established standards.
Posit, a novel datatype, offers sizes comparable to standard IEEE 754 formats but with higher precision. Analyzing elementary systems like a simple pendulum using posits provides insights into metrics like computational error and system stability, which allows us to make informed predictions about complex simulations when posits are employed.
By examining multiple states of the simple pendulum system, we observe that using posits results in significantly lower errors compared to floats and in behaviour closer to the ideal state of a simple pendulum. This highlights the effectiveness of posits in capturing the intricate dynamics of systems like the pendulum better than competitors of similar size, thereby enhancing the accuracy of simulations and bringing us closer to understanding the true behaviour of natural phenomena while simultaneously dealing with memory constraints.
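The precision-comparison methodology above can be sketched numerically. Stock NumPy has no posit type, so the snippet below uses float32 as a stand-in for the reduced-precision format and float64 as the reference, purely to illustrate how precision-dependent error in a pendulum simulation can be measured; the step count, time step, and initial angle are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np

def simulate_pendulum(dtype, steps=2000, dt=1e-3):
    """Semi-implicit (symplectic) Euler integration of a simple pendulum,
    theta'' = -(g/L) * sin(theta), carried out entirely at `dtype`."""
    g_over_L = dtype(9.81)
    h = dtype(dt)
    theta = dtype(0.5)   # initial angle (rad)
    omega = dtype(0.0)   # initial angular velocity (rad/s)
    for _ in range(steps):
        omega = dtype(omega - h * g_over_L * np.sin(theta))
        theta = dtype(theta + h * omega)
    return float(theta)

ref = simulate_pendulum(np.float64)   # high-precision reference
low = simulate_pendulum(np.float32)   # reduced-precision run
print(f"float64 theta: {ref:.9f}")
print(f"float32 theta: {low:.9f}")
print(f"reduced-precision error: {abs(ref - low):.3e}")
```

Substituting a software posit type for `dtype` would reproduce the paper's style of experiment directly, since every intermediate value is rounded back to the working format at each step.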

Posits are being considered as a viable alternative to IEEE-754 floating-point numbers, which suffer from various issues. Although the posit encoding scheme is generally more efficient than IEEE 754 for floating-point numbers with small exponents, its effectiveness diminishes with larger exponents, leading to inefficient encodings and an overall insufficient dynamic range.
This paper critically examines the optimal dynamic range that a number system should possess and conducts a comparative analysis of IEEE-754 floating-point numbers and posits in terms of their performance. Subsequently, a novel posit-like format is proposed, featuring bit-optimal variable-length exponent encoding and a bounded dynamic range. This new format ensures a constant dynamic range irrespective of bit-length and is demonstrated to surpass existing alternatives, simultaneously addressing multiple issues previously identified with posits.

Enjoy the break!

The Sets-of-Real-Numbers (SORN) format is an interval-based number representation for performing fast, low-complexity arithmetic operations. Since the implemented arithmetic is based on lookup tables, the applied SORN datatypes are not standardized and can be highly application-specific. Because the format's precision is generally rather low, evaluating suitable SORN datatypes is one of the major challenges when applying the format, since not all datatypes guarantee sufficient results for the implemented algorithms. Therefore, this paper presents an algorithmic approach to determine an optimized interval distribution for a SORN datatype for specific applications. The Adaptive Interval Segmentation (AIS) algorithm is gradient-based and applies directional nested intervals to adapt a floating-point functionality to SORN arithmetic. This approach is used to evaluate SORN datatypes for Hybrid SORN k-Nearest Neighbor (kNN) classification. For the MNIST dataset, the AIS algorithm provides seven SORN datatypes that show better classification results for Hybrid SORN kNN classification than floating-point implementations. This is particularly evident in a four-bit and a five-bit SORN datatype, which lead to accuracy increases of 0.24% and 0.26%, respectively.

This study examines the mixed-precision iterative refinement technique using posit numbers instead of standard IEEE floating-point. The process is applied to a general linear system Ax = b, where A is a large dense matrix. Row and column equilibration scale the matrix entries into higher-density regions of machine numbers before the O(n³) factorization operations are performed. A low-precision LU factorization followed by forward/backward substitution yields an initial estimate. The residual r = b - Ax is computed in a higher-precision configuration and then used as the right-hand side of a new linear system Ac = r. The corrector c is calculated and used to refine the solution. Results show that a 16-bit posit configuration coupled with equilibration yields accuracy comparable to IEEE half precision (FP16), showing potential for balancing efficiency and accuracy.
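The refinement loop described in the abstract can be sketched with NumPy. Since NumPy has no posit type, float32 stands in for the low-precision format and float64 for the higher-precision residual computation; the test matrix, its conditioning, and the iteration count are illustrative assumptions, and a real implementation would reuse the LU factors rather than re-solving each iteration (and would add the equilibration step).

```python
import numpy as np

def iterative_refinement(A, b, iters=5):
    """Mixed-precision iterative refinement for Ax = b: a low-precision
    solve (float32, standing in for a 16-bit posit) gives the estimate,
    and residuals are computed in float64."""
    A32 = A.astype(np.float32)
    # Low-precision solve gives the initial estimate x0
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(iters):
        r = b - A @ x                       # residual in higher precision
        # Corrector c from the low-precision system A c = r
        # (a real implementation reuses the LU factors of A32 here)
        c = np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
        x = x + c                           # refine the solution
    return x

rng = np.random.default_rng(0)
n = 200
A = rng.standard_normal((n, n)) + n * np.eye(n)   # well-conditioned test matrix
x_true = rng.standard_normal(n)
b = A @ x_true

x0 = np.linalg.solve(A.astype(np.float32), b.astype(np.float32)).astype(np.float64)
x = iterative_refinement(A, b)
print("error, single low-precision solve:", np.linalg.norm(x0 - x_true))
print("error, after refinement:          ", np.linalg.norm(x - x_true))
```

On a well-conditioned system like this, a handful of refinement steps recovers close to working-precision accuracy even though all O(n³) work happens in the low-precision format, which is the efficiency argument the paper makes for 16-bit posits.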

See you at CoNGA'25!

Location: TBD

My career spans the period from well before there was any kind of standard for floating-point arithmetic to the present era, in which hardware support for the full IEEE 754 Standard is almost extinct and we are returning to a state where floats behave differently on different processors… in other words, IEEE 754 now represents a guideline rather than a standard. Over the last four decades, I have accumulated quite a bit of surprising information about how the IEEE 754 Standard came to be and what is now causing its decline, through direct conversations with the decision makers. That information is not published anywhere. This talk will explain the little-known reasoning behind many of the choices that went into defining IEEE 754, dispel widely held myths about it, and show how we can learn to avoid its mistakes in creating a new generation of formats for representing real numbers.


Arithmetic accelerators are always in demand for fast computation and logic operations. Here, posit arithmetic plays an important role: it outperforms traditional IEEE-754 floating-point in terms of accuracy and dynamic range. This paper proposes efficient sequential architectures for a posit adder/subtractor and multiplier that work with the desired bit size of operands. A 32-bit architecture with different exponent sizes (ES) has been designed with a control unit. FPGA implementations of these architectures have been carried out on the Xilinx Virtex-7 xc7vx330t-3ffg1157 and Zynq UltraScale+ MPSoC ZCU102 devices. In comparison with existing work, the datapath delay is lowered by 64.64% for the 32-bit adder and 52.66% for the 32-bit multiplier on the Virtex-7 FPGA device. In addition, the area-delay (AD) product is reduced by 52.69% and 69.30% for the 32-bit posit adder and multiplier, respectively. The proposed design also consumes less dynamic power than existing architectures.


This article focuses on the Sets-of-Real-Numbers (SORN) format for digital arithmetic and signal processing. The format derives from universal numbers (unums) and has already proven to be a valuable alternative to legacy formats like fixed point or floating point. The main challenge of SORN arithmetic is degenerating accuracy due to increasing interval widths. The authors consider three fused SORN arithmetic operations (addition, multiplication, and multiply-add) and propose an approach to reduce interval growth and improve accuracy. The evaluation of accuracy and hardware performance shows that accuracy can be improved by up to 60%, while hardware complexity shows a moderate to high increase.


Enjoy the break!

Error resilience in neural networks has allowed the adoption of low-precision floating-point (float) representations for mixed-precision training to improve efficiency. 16-bit float representations were the first to be experimented with for this purpose. Although the IEEE 754 standard has long defined a 16-bit float representation, several other alternatives targeting mixed-precision training have also emerged. However, their varying numerical properties and differing hardware characteristics, among other things, make them more or less suitable for the task. Therefore, there is no commonly accepted choice of a 16-bit floating-point representation for neural network training. In this work, we evaluate all proposed 16-bit float variants and upcoming posit number representations on a set of Convolutional Neural Networks (CNNs) and other benchmarks to compare their suitability for neural network training. Posits generally achieve better results, indicating that their non-uniform accuracy distribution is more conducive to the training task. Our analysis suggests that, instead of having the same accuracy for all weight values as floats do, having more accuracy for the weights with larger magnitude and higher frequency improves the training accuracy, thereby challenging previously held assumptions while bringing new insight into dynamic range and precision requirements. We also evaluate their efficiency on hardware for mixed-precision training based on FPGA implementations. Finally, we propose the use of the distribution of network weight values as a heuristic for selecting the number representation to be used.
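The dynamic ranges of the candidate 16-bit formats differ sharply, which is central to the comparison above. The back-of-the-envelope computation below derives the bfloat16 and posit16 maxima analytically from their bit layouts (stock NumPy provides neither format), while float16 comes from NumPy itself; it is an illustration, not a result from the paper.

```python
import numpy as np

# IEEE half precision, as reported by NumPy
fp16_max = float(np.finfo(np.float16).max)        # 65504.0

# bfloat16 is not in stock NumPy; its largest finite value follows from
# its layout: 8 exponent bits (bias 127) and 7 fraction bits.
bf16_max = (2.0 - 2.0 ** -7) * 2.0 ** 127         # ~3.39e38

# posit16 with es = 2 (2022 Posit Standard): maxpos = (2^(2^es))^(nbits-2)
posit16_max = (2.0 ** (2 ** 2)) ** (16 - 2)       # = 2^56

print(f"float16  max ~ {fp16_max:.4e}")
print(f"bfloat16 max ~ {bf16_max:.4e}")
print(f"posit16  max ~ {posit16_max:.4e}")
```

The three formats trade fraction bits for exponent range very differently, which is why the abstract's question of which 16-bit format suits training has no one-size-fits-all answer.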


Various Kalman filter mechanizations have been proposed in the literature to minimize numerical instabilities due to floating-point rounding and precision in computers. Theoretically, all these mechanizations should lead to identical results, but in practice numerical differences arise from the floating-point representation used to perform the computations. The IEEE 754 floating-point format is the most widely used representation in computers today, which is why we focus on the numerical issues it causes in Kalman filters. These numerical issues can cause the filter to diverge (for example, when the positive definiteness of the covariance matrix is lost), which is undesirable, especially in real-time, mission-critical hardware. In this work, we study the applicability of posits, an alternative floating-point representation with the same bit length as IEEE 754 floats, to improving the stability of Kalman filters; we show the benefits of using posits over IEEE floating point and support the claim with a few case studies.


Conventionally, the front-end Digital Signal Processing (DSP) for applications in radio astronomy has employed low-precision fixed-point arithmetic. However, the next-generation large-scale projects for radio astronomy, such as the Square Kilometre Array (SKA), the Atacama Large Millimeter/sub-millimeter Array (ALMA) upgrade and the proposed next-generation Very Large Array (ngVLA), have ambitious science goals that require higher sensitivities, which in turn require high-precision arithmetic implementations. The increasing strength, bandwidth and number of sources of Radio Frequency Interference (RFI) also exacerbate the need for high-precision arithmetic. These factors lead to higher cost and power and longer design cycles for the DSP systems in radio astronomy. Meanwhile, hardware manufacturers are offering native support for low-precision floating-point number formats such as Float16, BFloat16 and variants thereof. In addition, 'posits', a new floating-point representation introduced by John Gustafson, claim to offer better accuracy than Float16 under certain conditions. With these compact data formats, signal processing systems are expected to consume less power and fewer resources. For typical radio astronomical observations, the achievable sensitivity is determined by the ability to suppress RFI and the accuracy of delay correction. In the following, these two aspects are studied for the front-end DSP modules of the SKA correlator and beamformer, where the coefficients are represented with Float16, BFloat16, variants of those formats and 16-bit posits, and compared against the current fixed-point representation.


The posit numeric format has attracted growing attention in recent years. Its tapered precision makes it especially suitable for many applications, including machine learning computation. However, due to its dynamic component bit-widths, implementing posit arithmetic in hardware is more expensive than its floating-point counterpart. To address this cost, this paper proposes approximate logarithmic designs for posit multiplication, division, and square root. The approximate logarithmic units require less power and are found to be well suited to applications that tolerate large errors, such as machine learning algorithms.
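As an illustration of the general idea behind approximate logarithmic arithmetic (not the paper's specific designs), the classic Mitchell approximation replaces a multiplication with an addition of approximate base-2 logarithms, using log2(1 + m) ≈ m; the helper names below are my own.

```python
import math

def mitchell_log2(x: float) -> float:
    """Mitchell's approximation of log2(x) for x > 0:
    write x = 2^k * (1 + m) with 0 <= m < 1, then log2(x) ≈ k + m."""
    k = math.floor(math.log2(x))
    m = x / 2 ** k - 1.0
    return k + m

def approx_mul(a: float, b: float) -> float:
    """Approximate a * b by adding Mitchell logs, then inverting with
    the same linear approximation: 2^(k+m) ≈ 2^k * (1 + m)."""
    s = mitchell_log2(a) + mitchell_log2(b)
    k = math.floor(s)
    m = s - k
    return 2 ** k * (1.0 + m)

print(approx_mul(4.0, 8.0))   # exact for powers of two: 32.0
print(approx_mul(3.0, 5.0))   # approximate: 14.0 instead of 15
```

In hardware this turns a multiplier into an adder plus shifters, at the cost of a bounded relative error (roughly 11% in the worst case for Mitchell multiplication), which is the trade-off error-tolerant workloads like machine learning can accept.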




Vojin G. Oklobdzija
Life Fellow IEEE, Professor Emeritus, University of California


Anantha Kinnal, Calligo Technologies Pvt Ltd, India

Ask a Question

Conference for Next Generation Arithmetic

How do I contact the conference chairs and/or local organisers?
The conference chairs can be reached via contact@posithub.org for program related matters.

Sponsor & Co-located Conferences



CoNGA 2024 will be held as a virtual event. Registration is free and open to anyone interested in the latest cutting-edge research in Next Generation Arithmetic!

URL to register: https://conga24.clickmeeting.com/conga24-day-1/register

Get Inspired

Get inspired by novel advances in computer arithmetic!

Global Event

Join researchers and technologists around the world!

