Chapter 2 – Combinational Logic Circuits

Logic and Computer Design Fundamentals
Chapter 2 – Combinational Logic Circuits
Part 2 – Circuit Optimization
Charles Kime & Thomas Kaminski
© 2008 Pearson Education, Inc.
(Hyperlinks are active in View Show mode)
Overview
 Part 1 – Gate Circuits and Boolean Equations
• Binary Logic and Gates
• Boolean Algebra
• Standard Forms
 Part 2 – Circuit Optimization
• Two-Level Optimization
• Map Manipulation
• Practical Optimization (Espresso)
• Multi-Level Circuit Optimization
 Part 3 – Additional Gates and Circuits
• Other Gate Types
• Exclusive-OR Operator and Gates
• High-Impedance Outputs
Chapter 2 - Part 2
2
Circuit Optimization
 Goal: To obtain the simplest
implementation for a given function
 Optimization is a more formal approach
to simplification that is performed using
a specific procedure or algorithm
 Optimization requires a cost criterion to
measure the simplicity of a circuit
 Distinct cost criteria we will use:
• Literal cost (L)
• Gate input cost (G)
• Gate input cost with NOTs (GN)
Literal Cost
 Literal – a variable or its complement
 Literal cost – the number of literal appearances in a Boolean expression corresponding to the logic circuit diagram
 Examples:
• F = BD + A B C + AC D   L = 8
• F = BD + A B C + A B D + AB C   L = 11
• F = (A + B)(A + D)(B + C + D)(B + C + D)   L = 10
• Which solution is best?
Gate Input Cost
 Gate input costs - the number of inputs to the gates in the
implementation corresponding exactly to the given equation
or equations. (G - inverters not counted, GN - inverters counted)
 For SOP and POS equations, it can be found from the
equation(s) by finding the sum of:
• all literal appearances
• the number of terms excluding single literal terms,(G) and
• optionally, the number of distinct complemented single literals (GN).
 Example:
• F = BD + A B C + AC D
G = 8, GN = 11
• F = BD + AB C + A B D + AB C
G = , GN =
• F = (A + B)(A + D)(B + C + D)( B + C + D) G = , GN =
• Which solution is best?
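The costing rules above are mechanical enough to sketch in code. This sketch is not from the slides; representing each product term as a list of (variable, complemented) pairs is an assumption made here for illustration.

```python
# Sketch (not from the slides): literal cost L and gate input costs
# G, GN for a two-level SOP expression. Each term is a list of
# (variable, complemented) pairs -- an assumed representation.

def sop_costs(terms):
    L = sum(len(t) for t in terms)                     # all literal appearances
    multi_terms = sum(1 for t in terms if len(t) > 1)  # terms excluding single-literal terms
    G = L + multi_terms                                # adds the second-level gate inputs
    complemented = {v for t in terms for v, c in t if c}
    GN = G + len(complemented)                         # one inverter per complemented variable
    return L, G, GN

# Example 1 of the cost-criteria slide: F = A + BC' + B'C
F = [[("A", False)],
     [("B", False), ("C", True)],
     [("B", True), ("C", False)]]
print(sop_costs(F))  # (5, 7, 9)
```

Run on Example 1 of the cost-criteria slide, it reproduces L = 5, G = L + 2 = 7, GN = G + 2 = 9.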
Cost Criteria (continued)
 Example 1: F = A + B C + B C

[Circuit diagram implementing F from inputs A, B, C]

 L (literal count) = 5: counts the AND inputs and the single-literal OR input
 G (gate input count) = L + 2 = 7: adds the remaining OR-gate inputs
 GN (gate input count with NOTs) = G + 2 = 9: adds the inverter inputs
Cost Criteria (continued)
 Example 2:
• F = A B C + AB C   L = 6, G = 8, GN = 11
• F = (A + C)(B + C)(A + B)   L = 6, G = 9, GN = 12

[Two circuit diagrams implementing F from inputs A, B, C]

 Same function and same literal cost
 But the first circuit has a better gate input count and a better gate input count with NOTs
 Select it!
Boolean Function Optimization
Minimizing the gate input (or literal) cost of a set of Boolean equations reduces circuit cost.
 We choose gate input cost.
 Boolean Algebra and graphical techniques are
tools to minimize cost criteria values.
 Some important questions:
• When do we stop trying to reduce the cost?
• Do we know when we have a minimum cost?
We treat optimum or near-optimum cost functions for two-level (SOP and POS) circuits first.
 Introduce a graphical technique using Karnaugh
maps (K-maps, for short)
Karnaugh Maps (K-map)
 A K-map is a collection of squares
• Each square represents a minterm
• The collection of squares is a graphical representation of a Boolean function
• Adjacent squares differ in the value of one variable
• Alternative algebraic expressions for the same function are derived by recognizing patterns of squares
 The K-map can be viewed as
• A reorganized version of the truth table
• A topologically-warped Venn diagram as used to
visualize sets in algebra of sets
Some Uses of K-Maps
 Provide a means for:
• Finding optimum or near optimum
 SOP and POS standard forms, and
 two-level AND/OR and OR/AND circuit
implementations
for functions with small numbers of
variables
• Visualizing concepts related to manipulating
Boolean expressions, and
• Demonstrating concepts used by computer-aided design programs to simplify large circuits
Two Variable Maps
 A 2-variable Karnaugh Map:

            y = 0       y = 1
  x = 0     m0 = x'y'   m1 = x'y
  x = 1     m2 = xy'    m3 = xy

• Note that minterm m0 and minterm m1 are "adjacent" and differ in the value of the variable y
• Similarly, minterm m0 and minterm m2 differ in the x variable
• Also, m1 and m3 differ in the x variable as well
• Finally, m2 and m3 differ in the value of the variable y
K-Map and Truth Tables
 The K-Map is just a different form of the truth table.
 Example – Two variable function:
• We choose a, b, c, and d from the set {0,1} to implement a particular function, F(x,y).
Function Table

  Input values (x, y)   Function value F(x, y)
  00                    a
  01                    b
  10                    c
  11                    d

K-Map

           y = 0   y = 1
  x = 0    a       b
  x = 1    c       d
K-Map Function Representation
 Example: F(x, y) = x

  F = x    y = 0   y = 1
  x = 0    0       0
  x = 1    1       1

 For function F(x, y), the two adjacent cells containing 1's can be combined using the Minimization Theorem:

  F(x, y) = xy' + xy = x
K-Map Function Representation
 Example: G(x, y) = x + y

  G = x + y   y = 0   y = 1
  x = 0       0       1
  x = 1       1       1

 For G(x, y), two pairs of adjacent cells containing 1's can be combined using the Minimization Theorem:

  G(x, y) = (x'y + xy) + (xy' + xy) = x + y    (the minterm xy is duplicated)
Three Variable Maps
 A three-variable K-map:

           yz = 00   yz = 01   yz = 11   yz = 10
  x = 0    m0        m1        m3        m2
  x = 1    m4        m5        m7        m6

 where each minterm corresponds to the product terms:

           yz = 00   yz = 01   yz = 11   yz = 10
  x = 0    x'y'z'    x'y'z     x'yz      x'yz'
  x = 1    xy'z'     xy'z      xyz       xyz'
 Note that if the binary values of two minterm indices differ in one bit position, the minterms are adjacent on the K-map
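The adjacency rule just stated can be checked with a one-line sketch (not from the text): two indices differ in a single bit position exactly when their XOR is a power of two.

```python
# Sketch: minterm indices label adjacent K-map squares exactly when
# their binary values differ in a single bit position.

def adjacent(i, j):
    diff = i ^ j                                  # bit positions where the indices differ
    return diff != 0 and diff & (diff - 1) == 0   # true iff exactly one bit differs

# m0 and m1 are adjacent; m1 and m3 are adjacent; m0 and m3 are not
print(adjacent(0, 1), adjacent(1, 3), adjacent(0, 3))  # True True False
```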
Alternative Map Labeling
 Alternate labelings are useful:

[Figure: two alternative labelings of the 3-variable map: one marking the x, y, and z regions with brackets along the map edges, the other with column labels yz = 00 01 11 10 and cell indices 0 1 3 2 / 4 5 7 6]
Example Functions
 By convention, we represent the minterms of F by a "1" in the map and leave the minterms of F' blank
 Example: F(x, y, z) = Σm(2,3,4,5)
 Example: G(a, b, c) = Σm(3,4,6,7)
 Learn the locations of the 8 indices based on the variable order shown (x, most significant, and z, least significant) on the map boundaries

[Two 3-variable maps with 1's in the listed minterm cells]
Combining Squares
 By combining squares, we reduce the number of literals in a product term, reducing the literal cost, thereby reducing the other two cost criteria
 On a 3-variable K-Map:
• One square represents a minterm with three variables
• Two adjacent squares represent a product term with two variables
• Four "adjacent" squares represent a product term with one variable
• Eight "adjacent" squares represent the function of all ones (no variables) = 1
Example: Combining Squares
 Example: Let F = Σm(2,3,6,7)

[3-variable map with 1's in cells 2, 3, 6, 7]

 Applying the Minimization Theorem three times:

  F(x, y, z) = x'yz + x'yz' + xyz + xyz'
             = yz + yz'
             = y

 Thus the four terms that form a 2 × 2 square correspond to the term "y".
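The same reduction can be confirmed exhaustively; this small sketch (an illustration, not from the slides) compares the minterm list against the single literal y.

```python
from itertools import product

# Sketch: check by truth-table comparison that the four minterms
# 2, 3, 6, 7 of F(x, y, z) collapse to the single literal y.

def F(x, y, z):
    index = 4 * x + 2 * y + z          # x most significant, z least significant
    return 1 if index in (2, 3, 6, 7) else 0

assert all(F(x, y, z) == y for x, y, z in product((0, 1), repeat=3))
print("F reduces to y")
```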
Three-Variable Maps
 Reduced literal product terms for SOP standard
forms correspond to rectangles on K-maps
containing cell counts that are powers of 2.
 Rectangles of 2 cells represent 2 adjacent
minterms; of 4 cells represent 4 minterms that
form a “pairwise adjacent” ring.
 Rectangles can contain non-adjacent cells as
illustrated by the “pairwise adjacent” ring
above.
Three-Variable Maps
 Topological warps of 3-variable K-maps
that show all adjacencies:
 Venn diagram
 Cylinder

[Figures: a Venn diagram with regions X, Y, Z labeled with minterm indices 0 through 7, and the map rolled into a cylinder so that the left and right edges touch]
Three-Variable Maps
 Example shapes of 2-cell rectangles:

[3-variable map showing three 2-cell rectangles and their product terms, involving the variable pairs XY, XZ, and YZ]

 Read off the product terms for the rectangles shown
Three-Variable Maps
 Example shapes of 4-cell rectangles:

[3-variable map showing 4-cell rectangles, each corresponding to a single-variable product term]

 Read off the product terms for the rectangles shown
Three Variable Maps
 K-Maps can be used to simplify Boolean functions by systematic methods. Terms are selected to cover the "1s" in the map.
 Example: Simplify F(x, y, z) = Σm(1,2,3,5,7)

[3-variable map with 1's in cells 1, 2, 3, 5, 7]

  F(x, y, z) = z + x'y
Three-Variable Map Simplification
 Use a K-map to find an optimum SOP equation for F(X, Y, Z) = Σm(0,1,2,4,6,7)

[3-variable map with 1's in cells 0, 1, 2, 4, 6, 7]

  F(X, Y, Z) = Z' + X'Y' + XY
Four Variable Maps
 Map and location of minterms (variable order W, X, Y, Z):

            YZ = 00   YZ = 01   YZ = 11   YZ = 10
  WX = 00   0         1         3         2
  WX = 01   4         5         7         6
  WX = 11   12        13        15        14
  WX = 10   8         9         11        10
Four Variable Terms
 Four variable maps can have rectangles
corresponding to:
• A single 1 = 4 variables (i.e., a minterm)
• Two 1s = 3 variables
• Four 1s = 2 variables
• Eight 1s = 1 variable
• Sixteen 1s = zero variables (i.e., constant "1")
Four-Variable Maps
 Example shapes of rectangles:

[4-variable map showing three example rectangles and their product terms (two involving X and Z, one involving W and Y)]
Four-Variable Maps
 Example shapes of rectangles:

[4-variable map showing further example rectangle shapes]
Four-Variable Map Simplification
 F(W, X, Y, Z) = Σm(0,2,4,5,6,7,8,10,13,15)

[4-variable map with 1's in the listed cells]

  F(W, X, Y, Z) = X'Z' + W'X + XZ
Four-Variable Map Simplification
 F(W, X, Y, Z) = Σm(3,4,5,7,9,13,14,15)

[4-variable map with 1's in the listed cells]

  F(W, X, Y, Z) = XZ + W'XY' + WY'Z + W'YZ + WXY
Systematic Simplification
 A Prime Implicant is a product term obtained by combining
the maximum possible number of adjacent squares in the map
into a rectangle with the number of squares a power of 2.
 A prime implicant is called an Essential Prime Implicant if it is
the only prime implicant that covers (includes) one or more
minterms.
 Prime Implicants and Essential Prime Implicants can be
determined by inspection of a K-Map.
 A set of prime implicants "covers all minterms" if, for each
minterm of the function, at least one prime implicant in the
set of prime implicants includes the minterm.
Example of Prime Implicants
 Find ALL prime implicants

[4-variable map over A, B, C, D with all prime implicants outlined, labeled CD, BD, AB, and AD]

ESSENTIAL Prime Implicants

[Same map with the essential prime implicants (among them BD and BC) highlighted; minterms covered by a single prime implicant are marked]
Prime Implicant Practice
 Find all prime implicants for:
F(A, B, C, D) = Σm(0,2,3,8,9,10,11,12,13,14,15)
[4-variable map with 1's in the listed minterm cells]
Another Example
 Find all prime implicants for:
G(A, B, C, D) = Σm(0,2,3,4,7,12,13,14,15)
[4-variable map with 1's in the listed minterm cells]
Five-Variable or More K-Maps
 For five-variable problems, we use two adjacent K-maps. It becomes harder to visualize adjacent minterms for selecting PIs. You can extend the problem to six variables by using four K-maps.

[Two 4-variable maps over W, X, Y, Z, one for V = 0 and one for V = 1]
Five-Variable or More K-Maps

[Two 4-variable maps, one for V = 0 and one for V = 1, each with a single 1 in the same cell position]

 The two 1's occupy the same position in the V = 0 and V = 1 maps, so they combine to eliminate V:

  F(V, W, X, Y, Z) = V'WXYZ + VWXYZ = WXYZ
Don't Cares in K-Maps
 Sometimes a function table or map contains entries for
which it is known:
• The input values for the minterm will never occur, or
• The output value for the minterm is not used
 In these cases, the output value need not be defined
 Instead, the output value is defined as a "don't care"
 By placing "don't cares" (an "x" entry) in the function table or map, the cost of the logic circuit may be lowered
 Example 1: A logic function having the binary codes for the BCD digits as its inputs. Only the codes for 0 through 9 are used. The six codes 1010 through 1111 never occur, so the output values for these codes are "x" to represent "don't cares."
Don't Cares in K-Maps
 Example 2: A circuit that represents a very common situation that
occurs in computer design has two distinct sets of input variables:
• A, B, and C which take on all possible combinations, and
• Y which takes on values 0 or 1.
and a single output Z. The circuit that receives the output Z observes it only for combinations of A, B, and C such that A = 1 and B = 1, or C = 0, otherwise ignoring it. Thus, Z is specified only for those combinations; for all other combinations of A, B, and C, Z is a don't care. Specifically, Z must be specified for AB + C' = 1, and is a don't care for:

  (AB + C')' = (A' + B')C = A'C + B'C = 1
 Ultimately, each don’t care “x” entry may take on either a 0 or 1
value in resulting solutions
 For example, an “x” may take on value “0” in an SOP solution and
value “1” in a POS solution, or vice-versa.
 Any minterm with value “x” need not be covered by a prime
implicant.
Example: BCD “5 or More”
 The map below gives a function F1(w, x, y, z) which is defined as "5 or more" over BCD inputs, with the don't cares used for the 6 non-BCD combinations:

[4-variable map: 0's in cells 0 through 4, 1's in cells 5 through 9, X's in cells 10 through 15]

  F1(w, x, y, z) = w + xz + xy    G = 7

 This is much lower in cost than F2, where the don't cares were treated as "0s":

  F2(w, x, y, z) = w'xz + w'xy + wx'y'    G = 12

 For this particular function, cost G for the POS solution for F1(w, x, y, z) is not changed by using the don't cares.
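As a check on the don't-care example, a sketch (not from the slides) can compare F1 = w + xz + xy with the "5 or more" condition on the ten BCD codes only, since the other six inputs are don't cares.

```python
from itertools import product

# Sketch: over the ten BCD inputs, verify that F1 = w + xz + xy
# agrees with "value is 5 or more"; the six non-BCD inputs are
# don't cares and are simply not checked.

def F1(w, x, y, z):
    return w | (x & z) | (x & y)

for w, x, y, z in product((0, 1), repeat=4):
    value = 8 * w + 4 * x + 2 * y + z
    if value <= 9:                       # BCD codes only
        assert F1(w, x, y, z) == (1 if value >= 5 else 0)
print("F1 agrees with '5 or more' on all BCD codes")
```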
Product of Sums Example
 Find the optimum POS solution:

  F(A, B, C, D) = Σm(3,9,11,12,13,14,15) + Σd(1,4,6)

[4-variable map with 1's in the minterm cells and x's in the don't-care cells; the 0's are covered to obtain a minimal SOP for the complement F']

  F' = A'B + B'D'

 Complementing gives the POS solution: F = (A + B')(B + D)
Optimization Algorithm
 Find all prime implicants.
 Include all essential prime implicants in the
solution
 Select a minimum cost set of non-essential
prime implicants to cover all minterms not yet
covered:
• Obtaining an optimum solution: See Reading
Supplement - More on Optimization
• Obtaining a good simplified solution: Use the
Selection Rule
Prime Implicant Selection Rule
 Minimize the overlap among prime
implicants as much as possible. In
particular, in the final solution, make
sure that each prime implicant selected
includes at least one minterm not
included in any other prime implicant
selected.
Selection Rule Example
 Simplify F(A, B, C, D) given on the K-map.

[Two copies of the 4-variable map: the first with the essential prime implicants selected, the second with the remaining prime implicants chosen by the selection rule; minterms covered by essential prime implicants are marked]
Selection Rule Example with Don't Cares
 Simplify F(A, B, C, D) given on the K-map.

[Two copies of the 4-variable map with 1's and x's: the first with the essential prime implicants selected, the second with the remaining selected prime implicants; minterms covered by essential prime implicants are marked]
Practical Optimization
 Problem: Automated optimization
algorithms:
• require minterms as starting point,
• require determination of all prime
implicants, and/or
• require a selection process with a potentially
very large number of candidate solutions to
be found.
 Solution: Suboptimum algorithms not
requiring any of the above in the general
case
Cubical Notation

  X1X2'X3X4' + X1X2X3X4' = X1X3X4'

  (1010)c + (1110)c = (1-10)c
Tabular Algorithm (Quine-McCluskey)
www.writphotec.com/mano4/Supplements/More_Optimization_supp4.pdf
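The tabular method linked above can be sketched compactly; the cube-string representation ('0', '1', '-' per variable) follows the cubical notation of the previous slide, while the function names are assumptions of this sketch.

```python
from itertools import combinations

# Sketch of the tabular (Quine-McCluskey) step: repeatedly combine
# implicants that differ in exactly one fixed position; cubes that can
# no longer be combined are the prime implicants.

def combine(a, b):
    """Merge two cubes differing in exactly one fixed position, else None."""
    diff = [k for k in range(len(a)) if a[k] != b[k]]
    if len(diff) == 1 and '-' not in (a[diff[0]], b[diff[0]]):
        k = diff[0]
        return a[:k] + '-' + a[k + 1:]
    return None

def prime_implicants(minterms, nbits):
    cubes = {format(m, f'0{nbits}b') for m in minterms}
    primes = set()
    while cubes:
        merged, used = set(), set()
        for a, b in combinations(sorted(cubes), 2):
            c = combine(a, b)
            if c is not None:
                merged.add(c)
                used.update((a, b))
        primes |= cubes - used       # cubes that merged with nothing are prime
        cubes = merged
    return primes

# The cube example from the previous slide: (1010)c + (1110)c = (1-10)c
print(combine('1010', '1110'))                     # 1-10
print(prime_implicants([2, 3, 6, 7], 3))           # {'-1-'}  -> the term y
```

On the earlier 3-variable example Σm(2,3,6,7) it returns the single cube `-1-`, i.e., the term y.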
Tabular Algorithm (Quine-McCluskey)

[4-variable map over X1, X2, X3, X4 with 1's and a d (don't care) entry, used as the running example for the tabular algorithm]
Tabular Algorithm (Quine-McCluskey)

[Quine-McCluskey combination and cover tables, and a 3-variable map over x, y, z used to check the result]
Example
 F = Σ(1,3,6,7,8,9,12,13)

[Quine-McCluskey tables for this function]
Example
ROW dominance: if an implicant pi covers all the minterms of pj plus at least one more, pi is said to dominate pj; in that case the implicant pj can be eliminated.
COLUMN dominance: if every implicant that covers column ci also covers column cj, but not vice versa, ci is said to dominate cj; in that case column ci can be eliminated.

  f = p1 + p4 + p5
Example with Don't Cares

[Quine-McCluskey tables including the don't-care minterms]

  f1 = p1 + p3
Multiple-Output Combinational Circuits
  F1 = X'Z + YZ    L = 4; G = 6; GN = 7
  F2 = XY + YZ    L = 4; G = 6; GN = 6
  Shared implementation: L = 6; G = 10; GN = 11
Multiple-Output Combinational Circuits
  f1 = X'Z + YZ    f2 = XY + YZ'    (L = 4; G = 6; GN = 7 each)
 Rewritten with a shared product term:
  f1 = X'Z + XYZ    f2 = YZ' + XYZ
  Shared implementation: G = 11; GN = 13
Quine-McCluskey Method for Multiple-Output Combinational Circuits

[Prime implicant and cover tables for the multiple-output tabular method]
Multiple-Output Combinational Circuits
 Multiple-output prime implicant
 p6 is an essential prime implicant for f1 but not for f2

[Multiple-output cover tables]
Multiple-Output Combinational Circuits
  f1 = p6 + p1 + p2 + p5;  f2 = p2 + p12;  f3 = p1 + p5 + p10
Simplification Methods
 Classical method:
• Identify the minterms
• Identify the prime implicants
• Select the prime implicants appropriately
 Pragmatic methods:
• Do not depend on the number of minterms
• Do not generate all the prime implicants
• Do not require generating all the alternative prime implicants
Espresso (1)
 Init: read the function F and the initial cover; evaluate the initial cost (G)
 Loop 1:
• Execute EXPAND
• [On the first pass only, execute ESSENTIAL_PRIMES]
• Execute IRREDUNDANT_COVER
• Compute the cost; if the cost has not improved, go to OUT
 Loop 2:
• Execute REDUCE
• Go to Loop 1
 OUT:
• Execute LAST_GASP
• If the cost does not improve, go to QUIT
• Go to Loop 2
 QUIT:
• Add the essential prime implicants back to the implicants found
Espresso (2)
 EXPAND:
• Extracts a prime implicant from each implicant of the function
• The implicants are ordered from the largest (fewest literals) to the smallest (most literals)
• Among the possible expansions, those that cover the most implicants and have the largest size are chosen
 ESSENTIAL_PRIMES:
• Analyzes each prime implicant to determine whether it is an essential prime implicant
• A prime implicant is judged essential if it has at least one minterm that is surrounded, in all n directions (where n is the number of variables of the function), by minterms of the same implicant or by 0's
• The essential prime implicants are removed from the solution and reintroduced in the final step
• The removed minterms are replaced with don't cares
Espresso (3)
 IRREDUNDANT_COVER:
• Removes implicants that are redundant (e.g., those that cover only don't cares) without leaving any implicant uncovered
 REDUCE:
• Used to escape local minima
• Each implicant is reduced to the smallest implicant that still guarantees cover of the function
• Reducing one implicant affects the implicants that follow it
• The implicants are ordered starting from the largest, followed by the implicants that differ from the preceding ones in the fewest positions
Espresso (4)
 LAST_GASP:
• Applies REDUCE to each implicant individually (one at a time), obtaining the smallest implicant that covers the minterms of the starting implicant
• EXPAND is applied to the cover generated in the previous step, selecting the prime implicants that cover at least two of the implicants obtained
• The resulting cover is combined with the one supplied as input to the LAST_GASP procedure, and IRREDUNDANT_COVER is applied
 QUIT:
• Adds the essential prime implicants to the implicants determined in the process and evaluates the cost
Example (1)
  F(A, B, C, D) = AD + ABD + BCD + ABCD    L = 12; G = 16

Example (2)
  F = (AB + AD + ABD) + BCD    L = 10; G = 14

Example (3)

[Intermediate Espresso step on the map]

Example (4)
  F = (AB + AD + ABD) + AC    L = 9; G = 13
Example Algorithm: Espresso
 Illustration on a K-map:

[Two 4-variable maps over A, B, C, D with 1's and x's: the original F after EXPAND, and the cover after ESSENTIAL & IRREDUNDANT COVER]
Example Algorithm: Espresso
 Continued:

[Two 4-variable maps: the cover after REDUCE, and after EXPAND]
Example Algorithm: Espresso
 Continued:

[Two 4-variable maps: the cover after IRREDUNDANT COVER, and the final cover after REDUCE, EXPAND, IRREDUNDANT COVER, LAST GASP, QUIT]
Example Algorithm: Espresso
 This solution costs 2 + 2 + 3 + 3 + 4 = 14
 Finding the optimum solution and comparing:

[4-variable map with the essential and selected prime implicants outlined; minterms covered by essential prime implicants are marked]

 There are two optimum solutions, one of which is the one obtained by Espresso.
Multiple-Level Optimization
 Multiple-level circuits - circuits that are
not two-level (with or without input
and/or output inverters)
 Multiple-level circuits can have reduced
gate input cost compared to two-level
(SOP and POS) circuits
 Multiple-level optimization is performed
by applying transformations to circuits
represented by equations while
evaluating cost
Transformations
 Factoring - finding a factored form from
SOP or POS expression
• Algebraic - No use of axioms specific to
Boolean algebra such as complements or
idempotence
• Boolean - Uses axioms unique to Boolean
algebra
 Decomposition - expression of a function
as a set of new functions
Transformations (continued)
 Substitution of G into F - expressing function F as a function of G and some or all of its original variables
 Elimination - Inverse of substitution
 Extraction - decomposition applied to
multiple functions simultaneously
Transformation Examples
 Algebraic Factoring
  F = A'C'D + A'BC' + ABC + ACD    G = 16
• Factoring:
  F = A'(C'D + BC') + A(BC + CD)    G = 16
• Factoring again:
  F = A'C'(B + D) + AC(B + D)    G = 12
• Factoring again:
  F = (A'C' + AC)(B + D)    G = 10
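The factoring chain can be verified exhaustively. The complement placements below (on A and C) are reconstructed assumptions, chosen so that the final factored form expands back, term by term, to the starting SOP.

```python
from itertools import product

# Sketch: truth-table check that the factored form equals the
# original SOP; n() is NOT for 0/1 values.

def n(v):
    return 1 - v

def original(A, B, C, D):
    return (n(A) & n(C) & D) | (n(A) & B & n(C)) | (A & B & C) | (A & C & D)

def factored(A, B, C, D):
    return ((n(A) & n(C)) | (A & C)) & (B | D)

assert all(original(*v) == factored(*v) for v in product((0, 1), repeat=4))
print("factoring preserved the function")
```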
Transformation Examples
 Decomposition
• The terms B + D and A'C' + AC can be defined as new functions E and H respectively, decomposing F:
  F = EH,  E = B + D,  H = A'C' + AC    G = 10
 This series of transformations has reduced G from 16 to 10, a substantial savings. The resulting circuit has three levels plus input inverters.
Transformation Examples
 Substitution of E into F
• Returning to F just before the final factoring step:
  F = A'C'(B + D) + AC(B + D)    G = 12
• Defining E = B + D, and substituting in F:
  F = A'C'E + ACE    G = 10
• This substitution has resulted in the same cost as the decomposition
Transformation Examples
 Elimination
• Beginning with a new set of functions:
  X = B + C
  Y = A + B
  Z = A'X + CY    G = 10
• Eliminating X and Y from Z:
  Z = A'(B + C) + C(A + B)    G = 10
• "Flattening" (converting to an SOP expression):
  Z = A'B + A'C + AC + BC    G = 12
• This has increased the cost, but has provided a new SOP expression for two-level optimization.
Transformation Examples
 Two-level Optimization
• The result of 2-level optimization is:
  Z = A'B + C    G = 4
 This example illustrates that:
• Optimization can begin with any set of equations, not just with minterms or a truth table
• Increasing gate input count G temporarily during a series of transformations can result in a final solution with a smaller G
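A truth-table sketch (not from the slides) confirming that the flattened SOP from the elimination example reduces to the two-level result, with the complement placement on A assumed:

```python
from itertools import product

# Sketch: check that Z = A'B + A'C + AC + BC equals Z = A'B + C
# for every input combination.

def flattened(A, B, C):
    return ((1 - A) & B) | ((1 - A) & C) | (A & C) | (B & C)

def optimized(A, B, C):
    return ((1 - A) & B) | C

assert all(flattened(*v) == optimized(*v) for v in product((0, 1), repeat=3))
print("Z reduces to A'B + C")
```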
Transformation Examples
 Extraction
• Beginning with two functions:
  E = A'B'D + A'BD'
  H = B'CD + BCD'    G = 16
• Finding a common factor and defining it as a function:
  F = B'D + BD'
• We perform extraction by expressing E and H as three functions:
  F = B'D + BD',  E = A'F,  H = CF    G = 10
• The reduced cost G results from the sharing of logic between the two output functions
Multilevel Circuits
  G = ABC + ABD + E + ACF + ADF    (L = 13; G = 17; GN = 17)
  G = AB(C + D) + E + AF(C + D)    (L = 9; G = 13; GN = 13)

Multilevel Circuits
  G = AB(C + D) + AF(C + D) + E    (G = 11; GN = 11)
  G = A(C + D)(B + F) + E    (L = 6; G = 9; GN = 9)
Simplification of Multilevel Circuits
 Factoring: common factors are identified
 Decomposition: the functions are expressed in terms of new functions
 Extraction: multiple functions are expressed in terms of new functions
 Substitution: a function F is expressed in terms of a function G and some or all of the variables of F
 Elimination: the expression of F in terms of G is replaced by the corresponding expanded expression (the inverse of substitution; also called flattening or collapsing)
RTL Logic

[Resistor-transistor logic implementations of the NOT, NAND, and NOR gates]

Functional Completeness of the NAND Gate

[Figures: NOT, AND, and OR built from NAND gates only]

Functional Completeness of the NOR Gate

[Figures: NOT, AND, and OR built from NOR gates only]
NAND Implementation
  F(X, Y, Z) = Σm(1,2,3,4,5,7)

[3-variable map with 1's in the listed cells]

  F(X, Y, Z) = Z + X'Y + XY'

 Complementing twice and applying DeMorgan's law converts the SOP form into a two-level NAND circuit:

  F(X, Y, Z) = ((Z)' · (X'Y)' · (XY')')'
NAND Implementation
  F(A, B) = A + B = (A' · B')' : an OR gate realized as a NAND of complemented inputs

[Derivation: a multi-term SOP expression F(A, B, C) is rewritten step by step, by double complementation and DeMorgan's law, into a form using only NAND operations]
NOR Implementation
  F(A, B) = AB = (A' + B')' : an AND gate realized as a NOR of complemented inputs

[Derivation: a POS expression F(A, B, C) is rewritten step by step, by double complementation and DeMorgan's law, into a form using only NOR operations]
Gray Code
From binary code Bn-1 Bn-2 ... B0 to Gray code Gn-1 Gn-2 ... G0:

  Gn-1 = Bn-1
  Gk = Bk ⊕ Bk+1,  with k = 0, 1, 2, ..., n-2

From Gray code to binary code:

  Bj = ⊕ (k = j, ..., n-1) Gk,  with j = 0, 1, 2, ..., n-1

  Decimal   Binary   Gray
  0         0000     0000
  1         0001     0001
  2         0010     0011
  3         0011     0010
  4         0100     0110
  5         0101     0111
  6         0110     0101
  7         0111     0100
  8         1000     1100
  9         1001     1101
  10        1010     1111
  11        1011     1110
  12        1100     1010
  13        1101     1011
  14        1110     1001
  15        1111     1000
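The conversion rules above have a well-known bitwise form; this sketch assumes the usual integer encoding of the code words.

```python
# Sketch of the rules above: binary -> Gray is B ^ (B >> 1), since each
# Gray bit is the XOR of adjacent binary bits; Gray -> binary XORs each
# Gray bit into all lower positions (Bj = XOR of Gk for k >= j).

def binary_to_gray(b):
    return b ^ (b >> 1)

def gray_to_binary(g):
    b = 0
    while g:
        b ^= g            # accumulate g, g>>1, g>>2, ...
        g >>= 1
    return b

print([format(binary_to_gray(i), '04b') for i in range(4)])  # ['0000', '0001', '0011', '0010']
assert all(gray_to_binary(binary_to_gray(i)) == i for i in range(16))
```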
Rotation Encoder

[Figures: absolute rotary encoder disks, 5 bits and 8 bits]
XOR
 Truth table:

  X   Y   X XOR Y
  0   0   0
  0   1   1
  1   0   1
  1   1   0

 Properties:

  X ⊕ 0 = X
  X ⊕ 1 = X'
  X ⊕ X = 0
  X ⊕ X' = 1
  X' ⊕ Y = (X ⊕ Y)'
  X ⊕ Y' = (X ⊕ Y)'
XNOR
 Truth table:

  X   Y   X XNOR Y
  0   0   1
  0   1   0
  1   0   0
  1   1   1

  (X ⊕ Y)' = (XY' + X'Y)' = (X' + Y)(X + Y') = XY + X'Y'
Parity Computation
  X ⊕ Y ⊕ Z = (XY' + X'Y) ⊕ Z
            = (XY' + X'Y)Z' + (XY' + X'Y)'Z
            = XY'Z' + X'YZ' + (XY + X'Y')Z
            = XY'Z' + X'YZ' + XYZ + X'Y'Z

Parity Computation (odd)
  X ⊕ Y ⊕ Z = (X ⊕ Y) ⊕ Z

[Cascaded XOR circuit computing the odd-parity function]
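The cascaded-XOR parity function can be sketched directly:

```python
from functools import reduce
from itertools import product

# Sketch: a cascade of XORs computes the odd-parity function -- the
# output is 1 exactly when an odd number of inputs are 1.

def odd_parity(bits):
    return reduce(lambda a, b: a ^ b, bits, 0)

assert all(odd_parity(v) == sum(v) % 2 for v in product((0, 1), repeat=3))
print(odd_parity((1, 0, 1)), odd_parity((1, 1, 1)))  # 0 1
```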
Hamming Code
The Hamming distance between two code words is the number of bit positions in which the two words differ.
ASCII codes with a parity bit have code words at Hamming distance 2 (detection of an odd number of errors).

  Code         Non-code
  0111010.0    0111010.1
  0111011.1    0111011.0
  0011011.0    0011011.1
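The Hamming-distance definition translates directly to code; the 8-bit strings below are illustrative, with the final bit playing the role of the parity bit.

```python
# Sketch: the Hamming distance counts positions where two equal-length
# words differ.

def hamming_distance(a, b):
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

# A 7-bit word plus parity bit vs. the same word with the parity bit
# flipped (distance 1, so not a code word), and vs. a word differing
# in two positions.
print(hamming_distance("01110100", "01110101"))  # 1
print(hamming_distance("01110100", "01110111"))  # 2
```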
Codes
 A code is a set of symbols and rules for representing the elements of a set
 WORD: a combination of the symbols provided by the code
 A code is ambiguous if some word of the code refers to two or more elements of the set
 Redundant code: a code that uses more symbols than strictly necessary to represent the elements of the set
Codes
With n symbols in base b, assuming a fixed-length code, b^n configurations (words) are available.
For encoding numeric values, the representable values range between

  m = 0 and M = b^n - 1

For non-numeric values, the b^n code words can be associated with the values to be represented (set S):

  b^n = #S,  so  n = log_b(#S) = ln(#S) / ln(b)

The value of n is rounded up.
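The rounding-up of n can be sketched with integer arithmetic, which avoids floating-point log at exact powers of the base:

```python
# Sketch: smallest n with b**n >= set_size, i.e. n = ceil(log_b(#S)).

def code_length(set_size, base=2):
    n = 0
    while base ** n < set_size:
        n += 1
    return n

print(code_length(10))      # 4  (bits for the 10 BCD digits)
print(code_length(26, 2))   # 5
```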
Hamming Code
Hamming (7,4) code:

  p1 p2 d1 p3 d2 d3 d4

• The distance between any two code words is 3
• It corrects single errors
Hamming Code (7,4)
  p1 p2 d1 p3 d2 d3 d4

  p1 = parity(d1, d2, d4)
  p2 = parity(d1, d3, d4)
  p3 = parity(d2, d3, d4)

Example: d1 d2 d3 d4 = 0101 gives p3 p2 p1 = 0 1 0
Hamming Code (7,4)
Error on d1: the word is sent with d1 d2 d3 d4 = 0101 but received as 1101; the parity bits are recomputed from the received data.

  transmitted p3 p2 p1 = 0 1 0
  recomputed  p3 p2 p1 = 0 0 1

The syndrome is 0 1 1 = 3: the error is in position 3 (d1).
Hamming Code (7,4)
Error on d4: the word is sent with d1 d2 d3 d4 = 0101 but received as 0100; the parity bits are recomputed from the received data.

  transmitted p3 p2 p1 = 0 1 0
  recomputed  p3 p2 p1 = 1 0 1

The syndrome is 1 1 1 = 7: the error is in position 7 (d4).
Hamming Code (7,4)
Error on p3: the data d1 d2 d3 d4 = 0101 arrive intact, but the parity bits are received as 1 1 0 instead of 0 1 0.

  received    p3 p2 p1 = 1 1 0
  recomputed  p3 p2 p1 = 0 1 0

The syndrome is 1 0 0 = 4: the error is in position 4 (p3).
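The three worked examples follow one scheme, sketched below; the list layout and function names are assumptions of this sketch, not notation from the slides.

```python
# Sketch of the (7,4) scheme above: word layout p1 p2 d1 p3 d2 d3 d4
# (positions 1..7); the received-vs-recomputed parity mismatch pattern
# p3 p2 p1, read as a binary number, is the position of a single error.

def parities(d1, d2, d3, d4):
    return (d1 ^ d2 ^ d4, d1 ^ d3 ^ d4, d2 ^ d3 ^ d4)   # (p1, p2, p3)

def encode(d1, d2, d3, d4):
    p1, p2, p3 = parities(d1, d2, d3, d4)
    return [p1, p2, d1, p3, d2, d3, d4]                  # positions 1..7

def syndrome(word):
    p1, p2, d1, p3, d2, d3, d4 = word
    q1, q2, q3 = parities(d1, d2, d3, d4)
    return 4 * (p3 ^ q3) + 2 * (p2 ^ q2) + (p1 ^ q1)     # 0 means no error

word = encode(0, 1, 0, 1)           # d1 d2 d3 d4 = 0101 -> p3 p2 p1 = 010
assert syndrome(word) == 0

word[2] ^= 1                        # flip position 3 (d1), as in the first example
print(syndrome(word))               # 3
```

Flipping position 7 (d4) instead yields syndrome 7, and flipping position 4 (p3) yields syndrome 4, matching the other two worked examples.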
Hamming Code (8,4)
  p1 p2 d1 p3 d2 d3 d4 p4

  p1 = parity(d1, d2, d4)
  p2 = parity(d1, d3, d4)
  p3 = parity(d2, d3, d4)
  p4 = parity(p1, p2, p3, d1, d2, d3, d4)

• Corrects single errors
• Detects double errors
Terms of Use
 All (or portions) of this material © 2008 by Pearson
Education, Inc.
 Permission is given to incorporate this material or
adaptations thereof into classroom presentations and
handouts to instructors in courses adopting the latest
edition of Logic and Computer Design Fundamentals
as the course textbook.
 These materials or adaptations thereof are not to be
sold or otherwise offered for consideration.
 This Terms of Use slide or page is to be included within
the original materials or any adaptations thereof.