How to implement computer arithmetic?

A computer is a general-purpose computing machine. Its computation is carried out by the CPU, and inside the CPU the actual arithmetic is done by the ALU.

Wiki description

  An arithmetic logic unit (ALU) is an execution unit inside the central processor and a core component of every CPU. Built from logic gates such as AND and OR gates, its main function is to perform binary arithmetic operations such as addition and subtraction (not including integer division). Essentially all modern CPU architectures represent numbers in two's complement form.

Here are a few questions:

  1. Implement addition without using arithmetic operators

  2. Implement subtraction without using arithmetic operators

  3. Implement multiplication without using the multiplication and division operators

If you already know the answers, there is no need to read on; otherwise, keep reading.

  Let's start with the adder.

    

Code

int _add(int v1, int v2) {
  int nRet = 0;
  do {
    nRet = v1 ^ v2;        // sum without carry
    v2 = (v1 & v2) << 1;   // carry, shifted into the next bit position
    v1 = nRet;             // feed the partial sum back in
  } while (v2);            // repeat until no carry remains
  return nRet;
}

 Manual testing

  5 + 5 = ?   (5 is 101 in binary)

  Iteration 1:
    xor: 101 ^ 101 = 000              (sum without carry)
    and: (101 & 101) << 1 = 1010      (carry)

  Iteration 2:
    xor: 0000 ^ 1010 = 1010           (sum without carry)
    and: (0000 & 1010) << 1 = 0000    (no carry left, the loop ends)

Result: binary 1010 = 0*2^0 + 1*2^1 + 0*2^2 + 1*2^3 = 10
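The same check can be done in code. Here is a minimal test harness of my own (not from the original post), assuming it is compiled in the same file as the _add function above:

#include <assert.h>
#include <stdio.h>

int main(void) {
    assert(_add(5, 5) == 10);       // the case traced above
    assert(_add(0, 0) == 0);
    assert(_add(123, 456) == 579);
    printf("_add looks correct\n");
    return 0;
}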

Now let's talk about subtraction.

  Subtraction works the same way as addition. To see why, you need to understand two's complement (the design of two's complement is so elegant that I doubt anyone would dispute it).

int _bitSub(int a, int b) {
	b = -b;                        // negate the subtrahend, then add
	do {
		int nXorResult = a ^ b;    // sum without carry
		b = (a & b) << 1;          // carry; b determines when the loop exits
		a = nXorResult;            // a holds the partial result
	} while (b);
	return a;
}
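One caveat: the line b = -b above still uses an arithmetic negation. In two's complement, negation is "invert the bits and add one", so the subtraction can also be sketched with no arithmetic operator at all by reusing _add. This variant (and its name) is my own illustration, not part of the original post:

int _bitSub2(int a, int b) {
	// two's complement negation: -b == ~b + 1, computed with _add to avoid the + operator
	int negB = _add(~b, 1);
	return _add(a, negB);
}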

Manual trace (I originally wanted to be lazy and skip this demonstration...)

   1 - 1  (negate the subtrahend and then add: in 4-bit two's complement, -1 is 1111)

  Iteration 1:
    xor: 0001 ^ 1111 = 1110
    and: (0001 & 1111) << 1 = 0010

  Iteration 2:
    xor: 1110 ^ 0010 = 1100
    and: (1110 & 0010) << 1 = 0100

  Iteration 3:
    xor: 1100 ^ 0100 = 1000
    and: (1100 & 0100) << 1 = 1000

  Iteration 4:
    xor: 1000 ^ 1000 = 0000
    and: (1000 & 1000) << 1 = 10000

  Iteration 5:
    xor: 00000 ^ 10000 = 10000
    and: (00000 & 10000) << 1 = 00000   (no carry left, the loop ends)

 It should be noted that we did this calculation with only 4 bits, so the bit that carries out past the fourth position is simply discarded; the result is therefore binary 0000, i.e. 1 - 1 = 0.
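To see the same truncation in code: in a full-width C int the carry simply lands in the fifth bit, and masking the sum down to its low 4 bits reproduces the hand calculation. The mask 0xF and this tiny harness are my own illustration, assuming the _add from above is in scope:

#include <stdio.h>

int main(void) {
    int sum = _add(1, 15);           // 0001 + 1111 from the 4-bit trace; gives 10000b, i.e. 16
    int lowFourBits = sum & 0xF;     // keep only the low 4 bits, discarding the overflow bit
    printf("%d\n", lowFourBits);     // prints 0, matching the truncated result 0000
    return 0;
}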

  1 - (-1)  (negate the subtrahend and then add: -(-1) is 0001)

  Iteration 1:
    xor: 0001 ^ 0001 = 0000
    and: (0001 & 0001) << 1 = 0010

  Iteration 2:
    xor: 0000 ^ 0010 = 0010
    and: (0000 & 0010) << 1 = 0000   (no carry left, the loop ends)

Result: binary 0010 is decimal 2.

So both results come out correct.
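A quick code check of my own (compiled together with the _bitSub above) confirms both traces:

#include <stdio.h>

int main(void) {
    printf("1 - 1    = %d\n", _bitSub(1, 1));    // prints 0, matching the first trace
    printf("1 - (-1) = %d\n", _bitSub(1, -1));   // prints 2, matching the second trace
    return 0;
}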

 

Before implementing multiplication, first consider a question.

Decimal: 100 * 10 = ? You should know the answer within a second. What about 100 * 100? 100 * 100 = 10000.

Note: the suffix b denotes a binary number.

Binary: 100b * 100b = ? Do you know what it equals? It works exactly like decimal: 100b * 100b = 10000b (this is really just the multiplicand shifted left by two bits: 100b << 2 = 10000b).

 

int _mul(int v1, unsigned int v2) {
    int nRet = 0;
    int nLeftMove = 0;
    do {
        if (v2 & 1) {                 // if the current bit of the multiplier is 1,
            nRet += v1 << nLeftMove;  // accumulate the multiplicand shifted by that bit's position
        }
        v2 = v2 >> 1;                 // move on to the next bit of the multiplier
        nLeftMove++;
    } while (v2);
    return nRet;
}

  5*2=101b*10b

Step 1: check bit 0 of 10b; it is 0, so the condition is not met and the multiplicand is not accumulated.

Step 2: check bit 1 of 10b; it is 1, so the condition is met and the shifted multiplicand is accumulated: 101b << 1 = 1010b.

Step 3: every bit of the multiplier has been processed, so the loop ends. The result is 1010b = 10.
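The same trace can be expressed as a quick test of my own (not from the original post), assuming it is compiled together with the _mul function above:

#include <assert.h>

int main(void) {
    assert(_mul(5, 2) == 10);        // 101b * 10b, the case traced above
    assert(_mul(4, 4) == 16);        // 100b * 100b = 10000b from the earlier example
    assert(_mul(100, 100) == 10000); // matches the decimal warm-up
    return 0;
}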

 


Origin www.cnblogs.com/binaryAnt/p/11104901.html