
# BigNum Math: Implementing Cryptographic Multiple Precision Arithmetic

## NOOK Book (eBook)

$42.49

## Overview

Implementing cryptography requires integers of significant magnitude to resist cryptanalytic attacks. Modern programming languages only provide support for integers which are relatively small and single precision. The purpose of this text is to instruct the reader regarding how to implement efficient multiple precision algorithms.

Bignum math is the backbone of modern computer security algorithms. It is the ability to work with hundred-digit numbers efficiently, using techniques that are both elegant and occasionally bizarre. This book introduces the reader to the concept of bignum algorithms and proceeds to build an entire library of functionality from the ground up. Through the use of theory, pseudo-code, and actual fielded C source code, the book explains each and every algorithm that goes into a modern bignum library. Excellent both as a learning tool for the student and as a reference for the practitioner, BigNum Math is for anyone with a background in computer science who has taken introductory-level mathematics courses. The text serves students learning mathematics and cryptography as well as practitioners who need a reference for any of the algorithms documented within.

* Complete coverage of Karatsuba multiplication, the Barrett algorithm, Toom-Cook 3-way multiplication, and more

* Tom St Denis is the developer of the industry standard cryptographic suite of tools called LibTom.

* This book provides step-by-step exercises to reinforce concepts

## Product Details

| ISBN-13: | 9780080488387 |
|---|---|
| Publisher: | Elsevier Science |
| Publication date: | 08/18/2006 |
| Sold by: | Barnes & Noble |
| Format: | NOOK Book |
| Pages: | 291 |
| Sales rank: | 1,074,106 |
| File size: | 10 MB |

## Read an Excerpt

#### BigNum Math

**IMPLEMENTING CRYPTOGRAPHIC MULTIPLE PRECISION ARITHMETIC**

**By Tom St Denis**

**Syngress**

**Copyright © 2006**

**Syngress Publishing, Inc.**

All rights reserved.

**ISBN: 978-0-08-048838-7**

#### Chapter One

**Introduction**

1.1 Multiple Precision Arithmetic

**1.1.1 What Is Multiple Precision Arithmetic?**

When we think of long-hand arithmetic such as addition or multiplication, we rarely consider the fact that we instinctively raise or lower the precision of the numbers we are dealing with. For example, in decimal we can almost immediately reason that 7 times 6 is 42. However, 42 has two digits of precision as opposed to the one digit we started with. A further multiplication, say by 3, results in a larger precision result, 126. In these few examples we have worked with numbers of several different precisions. Despite the various levels of precision, a single set of algorithms can be designed to accommodate them all.

By way of comparison, a fixed or single precision operation would lose precision on various operations. For example, in the decimal system with fixed precision 6 · 7 = 2.

Essentially, at the heart of computer–based multiple precision arithmetic are the same long-hand algorithms taught in schools to manually add, subtract, multiply, and divide.

**1.1.2 The Need for Multiple Precision Arithmetic**

The most prevalent need for multiple precision arithmetic, often referred to as "bignum" math, is within the implementation of public key cryptography algorithms. Algorithms such as RSA [10] and Diffie-Hellman [11] require integers of significant magnitude to resist known cryptanalytic attacks. For example, at the time of this writing a typical RSA modulus would be greater than 10^{309}. However, modern programming languages such as ISO C and Java only provide intrinsic support for integers that are relatively small and single precision.

The largest data type guaranteed to be provided by the ISO C programming language can only represent values up to 10^{19} as shown in Figure 1.1. On its own, the C language is insufficient to accommodate the magnitude required for the problem at hand. An RSA modulus of magnitude 10^{19} could be trivially factored on the average desktop computer, rendering any protocol based on the algorithm insecure. Multiple precision algorithms solve this problem by extending the range of representable integers while using single precision data types.

Most advancements in fast multiple precision arithmetic stem from the need for faster and more efficient cryptographic primitives. Faster modular reduction and exponentiation algorithms such as Barrett's reduction algorithm, which have appeared in various cryptographic journals, can render algorithms such as RSA and Diffie-Hellman more efficient. In fact, several major companies such as RSA Security, Certicom, and Entrust have built entire product lines on the implementation and deployment of efficient algorithms.

However, cryptography is not the only field of study that can benefit from fast multiple precision integer routines. Another auxiliary use of multiple precision integers is high precision floating point data types. The basic IEEE [12] standard floating point type is made up of an integer mantissa *q*, an exponent *e*, and a sign bit *s*. Numbers are given in the form *n* = *q* · *b*^{e} · (-1)^{s}, where *b* = 2 is the most common base for IEEE. Since IEEE floating point is meant to be implemented in hardware, the precision of the mantissa is often fairly small (*23, 48, and 64 bits*). The mantissa is merely an integer, and a multiple precision integer could be used to create a mantissa of much larger precision than hardware alone can efficiently support. This approach could be useful where scientific applications must minimize the total output error over long calculations.

Yet another use for large integers is within arithmetic on polynomials of large characteristic (i.e., *GF(p)[x]* for large *p*). In fact, the library discussed within this text has already been used to form a polynomial basis library.

**1.1.3 Benefits of Multiple Precision Arithmetic**

The benefit of multiple precision representations over single or fixed precision representations is that no precision is lost while representing the result of an operation that requires excess precision. For example, the product of two *n*-bit integers requires at least 2*n* bits of precision to be represented faithfully. A multiple precision algorithm would augment the precision of the destination to accommodate the result, while a single precision system would truncate excess bits to maintain a fixed level of precision.

It is possible to implement algorithms that require large integers with fixed precision algorithms. For example, elliptic curve cryptography (ECC) is often implemented on smartcards by fixing the precision of the integers to the maximum size the system will ever need. Such an approach can lead to vastly simpler algorithms that can accommodate the integers required even if the host platform cannot natively accommodate them. However, as efficient as such an approach may be, the resulting source code is not normally very flexible. It cannot, at run time, accommodate inputs of higher magnitude than the designer anticipated.

Multiple precision algorithms have the most overhead of any style of arithmetic. For the most part the overhead can be kept to a minimum with careful planning, but overall, this style of arithmetic is not well suited for most memory-starved platforms. However, multiple precision algorithms do offer the most flexibility in terms of the magnitude of the inputs. That is, the same algorithms based on multiple precision integers can accommodate any reasonable size input without the designer's explicit forethought. This leads to a lower cost of ownership for the code, as it only has to be written and tested once.

**1.2 Purpose of This Text**

The purpose of this text is to instruct the reader regarding how to implement efficient multiple precision algorithms. That is, to explain a limited subset of the core theory behind the algorithms, and the various "housekeeping" elements that are neglected by authors of other texts on the subject. Several texts give considerably detailed explanations of the theoretical aspects of algorithms and often very little information regarding the practical implementation aspects.

In most cases, how an algorithm is explained and how it is actually implemented are two very different concepts. For example, the Handbook of Applied Cryptography (*HAC*), algorithm 14.7 on page 594, gives a relatively simple algorithm for performing multiple precision integer addition. However, the description lacks any discussion of the fact that the two integer inputs may be of differing magnitudes. As a result, the implementation is not as simple as the text would lead people to believe. Similarly, the division routine (*algorithm 14.20, p. 598*) does not discuss how to handle sign or the dividend's decreasing magnitude in the main loop (*step #3*).

Such texts also omit several key optimal algorithms, such as "Comba" and Karatsuba multipliers and fast modular inversion, which we consider practical oversights. These optimal algorithms are vital to achieving any form of useful performance in non-trivial applications.

To solve this problem, the focus of this text is on the practical aspects of implementing a multiple precision integer package. As a case study, the "LibTomMath" package is used to demonstrate algorithms with real implementations that have been field tested and work very well. The LibTomMath library is freely available on the Internet for all uses, and this text discusses a very large portion of the inner workings of the library.

The algorithms presented will always include at least one "pseudo-code" description followed by the actual C source code that implements the algorithm. The pseudo-code can be used to implement the same algorithm in other programming languages as the reader sees fit.

This text shall also serve as a walk-through of the creation of multiple precision algorithms from scratch, showing the reader how the algorithms fit together and where to start on various tasks.

**1.3 Discussion and Notation**

**1.3.1 Notation**

A multiple precision integer of *n* digits shall be denoted as *x* = (*x*_{n-1}, ..., *x*_{1}, *x*_{0})_{β} and represents the integer *x* = *x*_{n-1}β^{n-1} + ... + *x*_{1}β^{1} + *x*_{0}β^{0}. The elements of the array *x* are said to be the radix-β digits of the integer. For example, *x* = (1, 2, 3)_{10} would represent the integer 1 · 10^{2} + 2 · 10^{1} + 3 · 10^{0} = 123.

The term "mp int" shall refer to a composite structure that contains the digits of the integer it represents, and auxiliary data required to manipulate the data. These additional members are discussed further in section 2.2.1. For the purposes of this text, a "multiple precision integer" and an "mp int" are assumed synonymous. When an algorithm is specified to accept an mp int variable, it is assumed the various auxiliary data members are present as well. An expression of the type *variablename.item* implies that it should evaluate to the member named "item" of the variable. For example, a string of characters may have a member "length" that would evaluate to the number of characters in the string. If the string *a* equals *hello*, then it follows that *a.length* = 5.

For certain discussions, more generic algorithms are presented to help the reader understand the final algorithm used to solve a given problem. When an algorithm is described as accepting an integer input, it is assumed the input is a plain integer with no additional multiple precision members. That is, algorithms that use integers as opposed to mp ints as inputs do not concern themselves with the housekeeping operations required such as memory management. These algorithms will be used to establish the relevant theory that will subsequently be used to describe a multiple precision algorithm to solve the same problem.

**1.3.2 Precision Notation**

The variable β represents the radix of a single digit of a multiple precision integer and must be of the form *q*^{p} for *q, p* ∈ Z^{+}. A single precision variable must be able to represent integers in the range 0 ≤ *x* < *q*β, while a double precision variable must be able to represent integers in the range 0 ≤ *x* < *q*β^{2}. The extra radix-*q* factor allows additions and subtractions to proceed without truncation of the carry. Since all modern computers are binary, it is assumed that *q* is two.

Within the source code that will be presented for each algorithm, the data type mp digit will represent a single precision integer type, while the data type mp word will represent a double precision integer type. In several algorithms (notably the Comba routines), temporary results will be stored in arrays of double precision mp words. For the purposes of this text, *x*_{j} will refer to the *j*'th digit of a single precision array, and *x̂*_{j} will refer to the *j*'th digit of a double precision array. Whenever an expression is to be assigned to a double precision variable, it is assumed that all single precision variables are promoted to double precision during the evaluation. Expressions that are assigned to a single precision variable are truncated to fit within the precision of a single precision data type.

For example, if β = 10^{2}, a single precision data type may represent a value in the range 0 ≤ *x* < 10^{3}, while a double precision data type may represent a value in the range 0 ≤ *x* < 10^{5}. Let *a* = 23 and *b* = 49 represent two single precision variables. The single precision product shall be written as *c* <- *a* · *b*, while the double precision product shall be written as *ĉ* <- *a* · *b*. In this particular case, *ĉ* = 1127 and *c* = 127. The most significant digit of the product would not fit in a single precision data type, and as a result *c* ≠ *ĉ*.

**1.3.3 Algorithm Inputs and Outputs**

Within the algorithm descriptions all variables are assumed to be scalars of either single or double precision as indicated. The only exception to this rule is when variables have been indicated to be of type mp int. This distinction is important, as scalars are often used as array indices and various other counters.

*(Continues...)*

Excerpted from *BigNum Math* by Tom St Denis. Copyright © 2006 by Syngress Publishing, Inc. Excerpted by permission of Syngress. All rights reserved. No part of this excerpt may be reproduced or reprinted without permission in writing from the publisher.