An Introduction to the C Programming Language and the ANSI Standard

Submitted by tushar pramanick on Mon, 02/25/2013 - 11:52

C and the ANSI Standard
For many years, the de facto standard for the C programming language was the K&R standard, defined by the book The C Programming Language, written by Brian Kernighan and Dennis Ritchie in 1978. Over time, however, many changes were made to the C language informally that were never documented in the K&R standard.

Fearing that C might lose its portability, a group of compiler vendors and software developers petitioned the American National Standards Institute (ANSI) in 1983 to establish a standard for the C language. ANSI approved the request and formed the X3J11 Technical Committee to work on the C standard. By the end of 1989, the committee had approved the ANSI standard for the C programming language.

The ANSI standard for C enhances the K&R standard and defines a set of commonly used C functions that every implementation is expected to provide in the ANSI C standard library. Today, all C compilers ship with this standard library, usually alongside additional compiler-specific functions.

This book focuses on the C functions defined in the ANSI standard, which is supported by all compiler vendors. All of the programs in this book can be compiled by any compiler that supports the ANSI standard. If you are interested in a specific compiler, you can learn its compiler-specific functions from that compiler's reference manual.
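As a quick illustration, here is a minimal sketch of a program that uses only functions declared in the ANSI C standard library headers (stdio.h, string.h, and math.h); the file name and the sample values are purely illustrative.

/* hello_ansi.c -- a minimal sketch that relies only on the ANSI C
   standard library, so any ANSI-conforming compiler should build it,
   for example: gcc -ansi -pedantic hello_ansi.c                     */
#include <stdio.h>   /* printf()  */
#include <string.h>  /* strlen()  */
#include <math.h>    /* sqrt()    */

int main(void)
{
    const char *name = "ANSI C";      /* sample string, chosen for illustration  */
    double root = sqrt(2.0);          /* sqrt() is part of the ANSI math library */

    /* size_t has no dedicated printf format in ANSI C, so cast to unsigned long */
    printf("%s has %lu characters.\n", name, (unsigned long)strlen(name));
    printf("The square root of 2 is approximately %.4f.\n", root);

    return 0;
}

On some systems you may need to link the math library explicitly (for example, by adding -lm to the compile command).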

 
