Whenever data are transmitted or stored, it should be possible to detect any changes to the data. In particular, stored programs must be protected against corruption, since a single altered bit of program code could ruin the program or modify its execution to such an extent that it no longer provides the required functions. The EEPROM memory used in smart cards is especially sensitive to external influences, such as heat and voltage fluctuations. Consequently, the sections that perform security-related functions must be protected so that undesired changes can be detected by the operating system and their negative effects can be avoided. Very sensitive file contents, such as program code, keys, access conditions, pointer structures and the like, must be protected against alteration.

Error detection codes (EDCs) are used for this purpose. The probability of detecting a change in a memory region protected by an EDC depends on the type of code used. Error correction codes (ECCs) are an extension of error detection: they make it possible not only to detect errors in the protected data, but also to correct them to a limited extent.

All of these codes work on the principle of assigning a checksum to the protected data. The checksum is usually stored along with the protected data. It is computed using a generally known algorithm, not a secret one. The data can be checked for changes as necessary by comparing the stored checksum with one computed anew.

A particular aspect of error detection and correction is that it utilizes a wide variety of mathematical procedures. Some of these provide a higher degree of protection for the more significant bits, in order to reduce adverse effects on numerical values as much as possible. In most cases, however, using such algorithms enormously increases the complexity and size of the program code. It is thus more common to use procedures in which error detection does not distinguish between the upper and lower parts of a byte, but instead operates on the byte as a whole.

Error detection and correction codes are very similar to message authentication codes (MACs) and cryptographic checksums (CCSs). However, there is a fundamental difference: EDC and ECC checksums can be computed and checked by anyone. In contrast, the computation of a MAC or CCS requires a secret key, since these codes are designed to protect against manipulation of the data rather than against accidental corruption.

The most widely known type of error detection code is doubtless the parity bit, which is appended to each byte in many data transmission protocols and some types of memory modules. Before computing a parity bit, you must decide whether to use even or odd parity. With even parity, the parity bit is chosen such that the total number of ‘1’ bits in the data byte plus the parity bit is an even number. With odd parity, this total is an odd number. If two bits in a byte are simultaneously wrong, the parity will not change and no error will be detected. Another drawback of parity-based error detection is the relatively large overhead of one parity bit for every eight data bits. This represents an additional memory load of 12.5 %. Furthermore, it is very difficult to work with supplementary parity bits when the memory is organized in bytes, since this requires significant program overhead. This is why parity bits are not used for error detection in smart card memories. Other methods, such as XOR and CRC checksums, are better suited to this task.
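The parity rule above can be sketched in a few lines of Python. The function name and the example bytes are illustrative, not taken from any standard:

```python
def parity_bit(byte, even=True):
    """Return the parity bit for one data byte.

    With even parity, the bit is chosen so that the total number of
    '1' bits in the data byte plus the parity bit is even; with odd
    parity, the total is odd.
    """
    ones = bin(byte & 0xFF).count("1")
    bit = ones % 2              # this choice makes the total even
    return bit if even else bit ^ 1

# The weakness described above: two simultaneous bit errors leave
# the parity unchanged, so the corruption goes undetected.
assert parity_bit(0b0110_1001) == parity_bit(0b0110_0101)
```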

XOR checksums
An XOR checksum, which is also known as a longitudinal redundancy check (LRC) due to how it is computed, can be obtained very simply and very quickly. These are both important criteria for error detection codes used in smart cards. In addition, the algorithm can be implemented extremely easily. Besides protecting data stored in memory, XOR checksums are typically used for data transmission (e.g., the ATR with the T = 1 transmission protocol). An XOR checksum is computed by performing consecutive logical XOR operations on all data bytes. In other words, byte 1 is XOR-ed with byte 2, the result of this is XOR-ed with byte 3 and so on. If the checksum is placed directly after the data and a new checksum is computed using both the data and the first checksum, the result is '00'. This is the simplest way to verify that the data and the checksum still have their original values and thus are uncorrupted.

The primary advantage of XOR checksums is that they can be computed quickly using a simple algorithm. The algorithm is so simple that its assembler code is only 10 to 20 bytes long. One reason for this is that the XOR operation is directly available in all processors as a machine instruction. In addition, an algorithm for XOR checksum computation must be implemented in almost every smart card operating system, due to the requirements of various ISO standards relating to data transmission using the T = 1 protocol. This algorithm can be used for other purposes without any additional overhead.

Unfortunately, XOR checksums also suffer from several serious drawbacks, which considerably limit their practical application. They are in principle not very secure. For example, they do not allow the interchange of two bytes within the overall data to be detected. Also, multiple errors can occur at the same position in several bytes and cancel each other out. The consequence of all this is that XOR checksums are mainly used for data transmission; they are used only to a very limited extent to verify the consistency of memory contents. XOR checksums can be computed very quickly, since specific machine instructions for this purpose are generally available. With a typical 8-bit smart card microcontroller clocked at 3.5 MHz, a rate of 1 μs/byte can be achieved, which corresponds to a throughput of 1 MB/s.
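The computation and its two weaknesses can be demonstrated in a short Python sketch; the sample data bytes are arbitrary illustrations:

```python
from functools import reduce

def xor_checksum(data: bytes) -> int:
    """LRC: XOR all data bytes together, starting from 0x00."""
    return reduce(lambda acc, b: acc ^ b, data, 0x00)

data = bytes([0x3B, 0x90, 0x11, 0x00])
lrc = xor_checksum(data)

# Recomputing over data plus the appended checksum yields '00',
# which is the simple verification described above.
assert xor_checksum(data + bytes([lrc])) == 0x00

# Weakness: interchanging two bytes does not change the checksum.
swapped = bytes([0x90, 0x3B, 0x11, 0x00])
assert xor_checksum(swapped) == lrc
```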

CRC checksums
The CRC method (cyclic redundancy check) also comes from the field of data communications, but it is significantly better than the XOR method. Still, a CRC checksum is also only an error detection code, so it cannot be used for error correction. The CRC method has been used for a long time in data transmission protocols, such as Xmodem, Zmodem and Kermit, and it is widely used in hardware implementations in hard disk drive controllers. It is based on the CCITT V.41 recommendation. An additional standard for CRC checksums is ISO/IEC 13239.

A CRC-16 checksum is generated by a 16-bit cyclic feedback shift register, while a 32-bit shift register is used to generate a CRC-32 checksum. The following description refers only to the CRC-16 checksum, since this is the most commonly used CRC checksum method for smart cards. The feedback in the shift register is determined by a generator polynomial. In mathematical terms, the data to be checked are represented as a large number, which is divided by the generator polynomial; the remainder from this division is the checksum. The CRC-16 method (that is, a 16-bit CRC) should only be used with data volumes up to 4 kB, since the error detection probability drops sharply beyond this point. However, this restriction can easily be circumvented by dividing the data into blocks that are no larger than 4 kB. Alternatively, a CRC-32 method (32-bit CRC checksum) can be used, which allows single-bit errors to be detected in up to 4 GB of data. With a CRC checksum, it is always necessary to know the generator polynomial as well as the initial value for the shift register, since otherwise the computation cannot be reproduced. In the overwhelming majority of cases (e.g. ISO 3309), the initial value for the shift register is zero, but several data transmission protocols (such as CCITT Recommendation X.25) set all bits to 1.

The computation of a CRC checksum, as illustrated in Figure 4.14, proceeds as follows: (a) the 16-bit CRC register is set to its initial value; (b) the data bits are fed into the feedback shift register one after the other, starting with the least-significant bit; and (c) the feedback (which represents the polynomial division) takes place via bitwise logical XOR operations on the contents of the shift register. A CRC checksum can be verified by again calculating the CRC checksum of the data and comparing the result with the checksum provided with the data. If they are the same, it follows that the data and the checksum have not been altered.

The major advantage of CRC checksums is that they provide reliable error detection, even with multiple errors. Only very few methods can achieve this. In addition, in contrast to the XOR method, CRC allows interchanged data bytes to be detected, since byte order definitely plays a role in checksum generation via the feedback shift register. However, it is very difficult to specify exact detection probabilities for such errors, since they are very dependent on the locations of the errors within the bytes in question.

The CRC algorithm is relatively simple, and the amount of code needed to implement it thus matches the needs of small smart card memories. Its greatest drawback is the slowness of the computation, since the algorithm requires the data to be shifted bit by bit. The CRC checksum algorithm was originally designed for hardware implementation, and this has a strong detrimental effect when it is implemented in software. The throughput of a CRC-16 routine is lower than that of an XOR checksum routine by a factor of around 200. A typical figure is 0.2 ms/byte at a 3.5-MHz clock frequency, which corresponds to a throughput of 5000 bytes/s. Computing a CRC checksum for a 10-kB smart card ROM would thus require around 2 seconds.

Many types of microcontrollers have a special component for hardware-assisted generation of CRC checksums for definable memory regions. The rates that can thereby be achieved can be as high as one byte per clock cycle. For a microcontroller clocked at 5 MHz without an internal clock divider, this would yield a throughput of 5 MB/s.
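The bit-by-bit computation described above can be sketched in Python. This is one common variant, not the only one: it feeds each byte least-significant bit first, as in the description, which means the V.41 generator polynomial x^16 + x^12 + x^5 + 1 (0x1021) appears in its bit-reversed form 0x8408, and the initial register value is left as a parameter because protocols differ on it:

```python
def crc16(data: bytes, init: int = 0x0000) -> int:
    """Bitwise CRC-16 over the CCITT V.41 generator polynomial
    x^16 + x^12 + x^5 + 1 (0x1021), processed LSB-first, so the
    polynomial is used in its reflected form 0x8408. The initial
    register value defaults to zero; X.25-style protocols would
    pass init=0xFFFF instead.
    """
    reg = init
    for byte in data:
        reg ^= byte
        for _ in range(8):
            # Feedback step: if the bit shifted out is 1, XOR the
            # (reflected) generator polynomial into the register.
            reg = (reg >> 1) ^ 0x8408 if reg & 1 else reg >> 1
    return reg

# Unlike an XOR checksum, interchanging two bytes changes the result,
# since byte order enters into the feedback shift register.
assert crc16(b"AB") != crc16(b"BA")
```

With init = 0x0000 this matches the CRC variant used by the Kermit protocol mentioned above, whose checksum over the test string "123456789" is 0x2189.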