Karnaugh Maps: An Essential Tool for Understanding Complex Digital Logic Designs

  • Context: Undergrad 
  • Thread starter: Kenneth Mann
  • Tags: Tutorial
SUMMARY

The discussion centers on Karnaugh Maps (K-Maps), a graphical method for simplifying Boolean equations in digital logic design, first introduced by M. Karnaugh in 1953. It details the structure of K-Maps, which consist of adjacent rectangular cells representing logical states, and explains how to minimize Boolean expressions using these maps. The conversation also highlights the evolution of K-Maps, including the Mahoney Map, which allows for the handling of more than four variables. Practical examples illustrate the application of Boolean axioms, particularly Postulate 5, in simplifying logic expressions.

PREREQUISITES
  • Understanding of Boolean algebra and its axioms
  • Familiarity with digital logic design concepts
  • Knowledge of truth tables and their construction
  • Basic skills in graphical representation of data
NEXT STEPS
  • Study the construction and application of Mahoney Maps for complex logic designs
  • Learn about advanced Boolean simplification techniques beyond K-Maps
  • Explore the use of K-Maps in practical digital circuit design scenarios
  • Investigate software tools for automating K-Map generation and simplification
USEFUL FOR

This discussion is beneficial for digital logic designers, electrical engineers, computer scientists, and students studying digital systems who seek to enhance their understanding of Boolean simplification techniques and K-Maps.

  • #31
30. Synchronous Circuit Example - - - 1 To 12 Counter
First, we note that figure 53 shows the BCD counter developed in the last insertion, which we did not have space to show there.

Now, we look at a counter which one person wanted to develop a couple of months back. This counter will count up to and include the value "12", but will not include the "0" value. To see where such a counter would be used, we need only look at the standard hour clock: it starts at the "one o'clock" value and counts up to and past the "twelve o'clock" value (though the values from "twelve" to "one" are actually handled as the "zero" values).

This counter is designed in much the same way as the BCD counter. Again, four flip-flops are used, and we simply determine where these are to be toggled. Using the truth table the same way as before, it is filled out as shown in figure 54. (Again, the lowest-order flip-flop simply toggles every time; as a trivial case it is not included in the truth table.) For each of the other flip-flops, and for the "carry out", we mark the place in the table (the count just before the transition) where the associated flip-flop is to be toggled.
This gives us the 'functions' (Jb, Jc, Jd and Co) which will go to the respective flip-flop J and K inputs and enable them for toggling at the proper times.

The table values are entered directly into their respective Karnaugh Maps, and solved. The maps are shown in figure 55. The logic diagram will be included in the next insertion.
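The truth-table construction just described can be sketched programmatically. The sketch below is my own illustration (it assumes A is the low-order flip-flop and D the highest): for each present count in the 1-to-12 cycle it marks which flip-flops must toggle to reach the next count - - exactly the marks entered in figure 54.

```python
def next_count(n):
    """The counter counts 1 through 12, then wraps back to 1 (no zero state)."""
    return 1 if n == 12 else n + 1

def toggle_table():
    """For each present count, list the flip-flops that must toggle.
    Bit 0 is flip-flop A (low order), bit 3 is flip-flop D."""
    table = {}
    for count in range(1, 13):
        changed = count ^ next_count(count)   # XOR picks out the changing bits
        table[count] = [name for i, name in enumerate("ABCD") if changed >> i & 1]
    return table

table = toggle_table()
```

As expected, flip-flop A toggles in every row (the trivial case omitted from the truth table), and at count 12 the wrap back to 1 toggles A, C and D.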

KM
 

Attachments

  • Figure 53.jpg
  • Figure 54.jpg
  • Figure 55.jpg
  • #32
31. One To Twelve Counter - - Part 2

The basic logic for the one-to-twelve counter that we have derived is shown in figure 56.

One additional caution should always be exercised when designing sequential logic, especially where "don't-care" states are used. We should always check afterward to make sure that no "blip", like a power interruption or line spike, can put us into a state from which we cannot get out. Normally this won't happen, but we should check just to make sure. Obviously, if such a 'blip' occurs, our circuit will be thrown out of sequence; there's nothing we can do about that, since logic circuitry is dependent upon its power sources, etc., but we do want to be able to recover afterward.

To check the circuitry we must assume the circuit to be in every possible state, and trace its actions via the logic diagrams. The "State Diagram" shown in figure 57 shows us where the 1-to-12 counter goes during the transition from each 'state' (count), and thereby we see that this unit will recover on its own. (We already knew by definition where the count would go after each 'allowed' state.) Our state diagram also shows us that this counter is a simple "state machine" (Moore Machine); however, we shall consider that topic beyond the scope of this discussion. (As an example, such a discussion would have to factor in the types of flip-flops and their "excitation table" characteristics, whether there are external inputs, etc.)
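The recovery check described above - - place the circuit in every possible state and trace where it goes - - is easy to automate once the next-state behavior is written down. The sketch below is my own illustration: the next-state map is a stand-in (figure 57's actual don't-care transitions are not reproduced here), but the checking procedure is the same.

```python
def recovers(next_state, legal):
    """True if every one of the 16 states eventually reaches a legal state."""
    for start in range(16):
        seen, s = set(), start
        while s not in legal:
            if s in seen:          # looped without reaching the cycle: lockout
                return False
            seen.add(s)
            s = next_state[s]
    return True

legal = set(range(1, 13))          # the allowed counts, 1..12

# Stand-in next-state map: the normal 1..12 sequence, with the four unused
# states (0, 13, 14, 15) falling back into the cycle.
nxt = {s: (1 if s == 12 else s + 1) for s in range(1, 13)}
nxt.update({0: 1, 13: 14, 14: 15, 15: 0})
```

With this map every state recovers; deliberately making states 13 and 14 point at each other would create exactly the kind of lockout this check is meant to catch.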

KM
 

Attachments

  • Figure 56.jpg
  • Figure 57.jpg
  • #33
32. Base Six Counter


The counter illustrated in this insertion is configured in the same manner as the others. It is intended to handle count values "0" through "5". Its truth table is given in figure 58, its Karnaugh Maps in figure 59, and its diagram in figure 60. The design derivation is left to the reader to work through.

KM
 

Attachments

  • Figure 60.jpg
  • Figure 59.jpg
  • Figure 58.jpg
  • #34
33. A Time-Of-Day Clock


This insertion is included simply to give a hint as to how the count modules shown in the previous insertions might be used in a "real world" application. In other words, it shows how we might build a twelve-hour clock unit like the ones that have been offered at very low cost for the last twenty or more years. Figure 61 illustrates such a clock, and gives us an indication of just how simple it is. It drives simple BCD displays to give us a five-digit seconds-minutes-hours indication, updated by a once-per-second count. The lowest (leftmost) BCD counter updates once per second and puts out its carry once every ten seconds. This carry goes to the next (base six) counter at each of those ten-second intervals and enables that counter to update. Its output then enables the next (BCD) counter to update every 60 seconds. This is repeated up to the final (one-to-twelve) counter, whose output carry goes to the toggle inputs of a standard flip-flop, which toggles back and forth once every twelve hours to give us the "AM/PM" indicator.

Also shown is a "clock load" circuit block diagram. This may be anything from a set of 'advance' buttons similar to what most digital clocks now have, to a moderately more complex keypad-type entry, to an even more sophisticated interface to WWV (very accurate) or GPS (even more accurate).
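The cascade just described can be modeled in a few lines. This is my own sketch of the block diagram's behavior (the stage details are assumptions about figure 61, and the AM/PM flip-flop is toggled here at the 1-to-12 counter's carry, i.e. the 12-to-1 wrap):

```python
class Clock12:
    """Cascade: mod-10 and mod-6 counters for seconds, the same pair for
    minutes, a 1-to-12 counter for hours, and an AM/PM toggle flip-flop."""
    def __init__(self):
        self.s1 = self.s10 = self.m1 = self.m10 = 0
        self.hour = 12                    # arbitrary starting display: 12:00:00
        self.pm = False

    def tick(self):
        """One pulse per second; each stage updates only on the carry
        (roll-over) of the stage before it."""
        self.s1 = (self.s1 + 1) % 10
        if self.s1:
            return                        # no carry: nothing further updates
        self.s10 = (self.s10 + 1) % 6
        if self.s10:
            return
        self.m1 = (self.m1 + 1) % 10
        if self.m1:
            return
        self.m10 = (self.m10 + 1) % 6
        if self.m10:
            return
        self.hour = 1 if self.hour == 12 else self.hour + 1
        if self.hour == 1:                # carry out of the 1-to-12 counter
            self.pm = not self.pm
```

After 3600 ticks from 12:00:00 the display reads 1:00:00 and the AM/PM flip-flop has toggled once.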

KM
 


  • #35
34. Finding SPOS solutions.


It was mentioned earlier (in insertion # 8) that the Karnaugh Map is primarily a tool devised with the aim of obtaining Simple-Sum-Of-Products (SSOP) solutions. As a final topic, we will show here that it can also be used to derive Simple-Product-Of-Sums (SPOS) solutions.

The rules for deriving SPOS solutions are as follows:

1) Determine the basic functions and enter them into maps as we have been doing all along.

2) Group the empty (zero) cells (plus any desired "don't cares"), thus yielding the SSOPs for f ' (f-bar).

3) Determine the SSOP solution terms for f '.

4) Then, to get "f ", complement both sides of the resulting equations.

5) Apply De Morgan's theorems.

That's all there is to it. As an example, consider the function with the following minterms:

Y = A'B'C'D' + A'B'CD' + A'B'CD + A'B'C'D + AB'CD'

The Karnaugh mapping would be that shown in figure 62, so when we group the empty cells, we get that shown in figure 63. The resultant SSOP is:

Y' = B + AC'

Y = (B + AC')'
Y = B'(AC')'

Y = B'(A' + C) [I hate using the apostrophe in place of the "overline"]

If we had sought the SSOP solution directly, the groupings would resemble those of figure 64, and the SSOP solution would be:

Y = A'B' + B'C

In this case, if we factor the SSOP answer we get the same as the SPOS; however, the conversion will not always be this easy.
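Steps 4 and 5 - - complement, then De Morgan - - can be checked exhaustively over all sixteen input combinations. A quick sketch (my own check, not part of the original insertion):

```python
from itertools import product

def y_bar(a, b, c, d):
    """The grouped empty cells: Y' = B + AC'."""
    return b or (a and not c)

def y_spos(a, b, c, d):
    """After complementing and applying De Morgan: Y = B'(A' + C)."""
    return (not b) and ((not a) or c)

def y_ssop(a, b, c, d):
    """The direct SSOP grouping: Y = A'B' + B'C."""
    return ((not a) and (not b)) or ((not b) and c)

# The complement of Y' and both two-level forms agree everywhere.
assert all(y_spos(*v) == y_ssop(*v) == (not y_bar(*v))
           for v in product([False, True], repeat=4))
```

(Note that both forms are also true at cell AB'CD, which is not in the minterm list above, so the groupings shown implicitly treat that cell as a don't-care.)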


KM
 

Attachments

  • Figure 64.jpg
  • Figure 63.jpg
  • Figure 62.jpg
  • #36
35. SPOS Use Example


If we look back to the example of insertion # 22, we get a case in which use of the SPOS might be slightly preferable to the SSOP, and which indicates that it is prudent to check out both alternatives. In figure 37 this map was marked and then solved for the SSOP implementation, for which the circuitry is shown in figure 38. Now we shall solve this same map for the SPOS implementation; that grouping is shown in figure 65, along with the resulting SPOS derivation of:

W = (A ' + C ' )(A + B)

The resulting circuits for the SPOS are then shown in figure 66, beneath those for the original SSOP implementation. The two functional depictions are pretty much equivalent; however, the two "practical" implementations seem to favor the SPOS case. This implementation apparently allows us to do without a couple of inverters, but this may not be the advantage in reality that it initially appears to be. It all depends on the surrounding circumstances. For example, if the signals come from flip-flops, the inverted signals are probably already available along with the uninverted ones. The available selection of gate types is also a consideration. Both alternatives should be examined, and then the choice made.
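Multiplying the SPOS product out gives a two-term sum of products, and the two forms can be checked against each other exhaustively. (The SOP form below is my own expansion of W, not necessarily the exact grouping of figure 37; the redundant consensus term BC' drops out on expansion.)

```python
from itertools import product

def w_spos(a, b, c):
    """W = (A' + C')(A + B)."""
    return ((not a) or (not c)) and (a or b)

def w_sop(a, b, c):
    """Multiplied out, with the consensus term BC' absorbed: W = A'B + AC'."""
    return ((not a) and b) or (a and (not c))

# All eight input combinations agree.
assert all(w_spos(*v) == w_sop(*v) for v in product([False, True], repeat=3))
```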

KM
 

Attachments

  • Figure 65.jpg
  • Figure 66.jpg
  • #37
36. SPOS Decode For BCD Counter


Here we look at another example; in this case, the decode logic for the BCD counter derived in insertion # 29. We will derive the SPOS logic needed to drive each flip-flop, and compare each to its SSOP counterpart. The grouping for each case is shown in Figure 67.

1) In the first grouping, to derive the SPOS logic for "Jb", we circle all the "empty" spaces (plus a couple of "don't cares"), which in this case are all enclosed in a green circle. This grouping, which gives us the logic for [Jb '], then yields the equation:

Jb ' = A ' therefore
Jb = A

This is identical to what we got for the SSOP case (which we could have anticipated here, because the result yields trivial logic), so this logic stays the same.

2) In the second grouping, to derive the SPOS logic for "Jc", we circle all the "empty" spaces (plus some "don't cares"), in this case enclosed with two circles, one green and one red. This grouping, which gives us the logic for [Jc '], then yields the equation:

Jc ' = A ' + B ' therefore
Jc = (A ' + B ' ) '
= AB

Again, it is identical (because again the result reduced to something simpler than a full SPOS or SSOP form), so again the logic is the same.

3) In the third grouping, to derive the SPOS logic for "Jd", we circle all the "empty" spaces (plus some "don't cares"), in this case enclosed in three circles, one green, one red and one dark blue. This grouping, which gives us the logic for [Jd '], then yields the equation:

Jd ' = A ' + BC ' + B ' D ' therefore, we get
Jd = (A ' + BC ' + B ' D ' ) '
= A (BC ' ) ' (B ' D ' )'
= A (B ' + C)(B + D)

This time, we do get a full SPOS representation, so we compare it to the SSOP case (Jd = AD + ABC), and the result appears to be pretty much a "wash" (at least gatewise).

4) In the fourth grouping, to derive the SPOS logic for "Co", we circle all the "empty" spaces (plus some "don't cares"), in this case enclosed with two circles, one red and one green. This grouping, which gives us the logic for [Co '], then yields the equation:

Co ' = D ' + A ' therefore
Co = (A ' + D ' ) '
= AD

Again, we get a trivial case, which is the same as for the SSOP. It should be noted (again), however, that the results will not always be identical, as they were in three of the four cases here. It is still advisable to check both ways.
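For the one non-trivial case, Jd, the SPOS and SSOP forms need only agree on the counter's legal states 0 through 9; the six unused states are don't-cares and are free to differ. A quick check (my own sketch; it assumes A is the low-order flip-flop):

```python
def bits(count):
    """Flip-flop outputs (A, B, C, D) for a count, A being the low-order bit."""
    return tuple((count >> i) & 1 for i in range(4))

def jd_ssop(count):
    a, b, c, d = bits(count)
    return (a & d) | (a & b & c)              # Jd = AD + ABC

def jd_spos(count):
    a, b, c, d = bits(count)
    return a & ((1 - b) | c) & (b | d)        # Jd = A(B' + C)(B + D)

# The two forms agree on every legal BCD state.
assert all(jd_ssop(n) == jd_spos(n) for n in range(10))
```

Jd goes true only at counts 7 and 9 - - exactly where the high-order flip-flop must toggle in a BCD count sequence.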

KM
 

Attachments

  • Figure 67.jpg
  • #38
37. SPOS For The 3 X 3 Multiplier


One final, more complex SPOS example will be considered here. In this case, we deal with the "3 X 3 Multiplier" that we derived back in insertions # 25 and # 26. The maps for the variables from that example, which were shown in figures 45 and 46, are shown again here in figures 68, 69 and 70 - - - but this time being solved for SPOS, so that here the "zeros" (or empty spaces) are grouped.

In figure 68 we seek the SPOS for "P", and we see that we can group the "zeros" as shown with the green-circled and red-circled groupings. From this we get:

P ' = A ' + D ' therefore
P = (A ' + D ' ) '
= AD
and this is the same (trivial) case as with the SSOP.

Also, in figure 68, we seek the SPOS for "R". Here we can get the following five groupings:
Red: D ' E '
Green: A ' B '
Orange: B ' E '
Purple Hatched: A B D E
Black: A ' B D ' E
Here, we can see that there are five terms and fourteen instances of the variables; thus it appears reasonable in this case that use of the SPOS will probably not give an advantage over the SSOP. (We could work it out, but there doesn't appear to be an advantage to that.)
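The bookkeeping used above - - counting terms and literal instances to judge whether the SPOS is worth pursuing - - can be sketched in a few lines (the term strings below are transcribed from the grouping list for R):

```python
def cost(terms):
    """Return (number of terms, total literal instances); each variable
    letter, primed or not, counts as one literal."""
    literals = sum(sum(ch.isalpha() for ch in term) for term in terms)
    return len(terms), literals

# Groupings read off for R in figure 68.
r_terms = ["D'E'", "A'B'", "B'E'", "ABDE", "A'BD'E"]
r_cost = cost(r_terms)                 # five terms, fourteen literals
```

The same tally for the SSOP side of the map would let the two alternatives be compared at a glance.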

In figure 69, we first seek the SPOS for "S". Here, the attempt shown has eleven groupings:
Green: D ' E ' F '
Red: A ' B ' C '
Green Hatched: A ' B C ' E '
Red Hatched: B ' D ' E F '
Orange: A ' B ' D ' E
Orange Hatched: A ' B D ' E '
Black Hatched: A ' B ' D ' E '
Black: A C D F
Purple Hatched: A C ' D F '
Purple: A ' B C D E
Blue: A B D ' E F
Here, there are more terms (11 to 9) and more variable iterations (42 to 36), so there's probably not going to be much advantage here, if any, so we don't go on.

When we seek the SPOS for "T" we have the following eleven groupings (two are not circled):
Black Hatched: B ' D ' E '
Purple Hatched: A ' B ' E '
Orange: C ' D ' F '
Black: A ' C ' F '
Red: E ' F '
Red Hatched: A C ' E F
Green: B ' C '
Green Hatched: B C D F '
Gray: A C D E F (cells 61 & 63)
(uncircled) A B C D F (cells 47 & 63)
(uncircled) A ' B C D ' E F (cell 54)
Again, there doesn't appear to be much advantage to going on.

In figure 70, we are seeking the SPOS for the functions "V" and "X". Here again, once the terms are determined, there appears to be little advantage to using the SPOS.

KM
 

Attachments

  • Figure 68.jpg
  • Figure 69.jpg
  • Figure 70.jpg
  • #39
Finally - - -


At this point, this series of insertions on Karnaugh Mapping is ended. I hope that these have been of use and serve to show some of the power and flexibility of the ordered reflective maps demonstrated in this series.

There are no more discussions intended on this subject; however, if interesting examples are found from time to time, these may be added. I hope that anyone who comes up with interesting applications will also add them. We might start a collection of applications, either in this or another string. After all, this is an applied subject.

I am also considering other possible strings, if any can be sufficiently developed. Presently the only one I have worked out at all is on the subject of "state machines", and I'm not sure that it would be applicable here. Last year, I wrote up quite a bit on 'computer design and organization', but then the machine that I was using turned into an inert object, and I haven't had the time or money yet to repair it, so until I do, that one will remain "shelved".

Also, any comments, additions and corrections would be appreciated. (I am aware that figure 64 is incorrect. There are also a couple of errors in that insertion.) Any constructive input is welcome.

KM
 
  • #40
May I know how I can find the next state from the KV map, when the KV map for its first condition is given?
 
  • #41
teng125 said:
May I know how I can find the next state from the KV map, when the KV map for its first condition is given?


What are you looking for? Your question is not clear. In general, a K-Map only gives the minterms of a Boolean equation, and allows us to combine them for minimization purposes.

Are you looking to derive State conditions for State Machines? (The K-Map can be adapted to become State Tables, but this is another subject altogether, and requires a derivation of its own - - and a lot of explanation - - more than I care to go into at this time.) If so, you would then need to state your conditions a lot more completely.

KM
 
  • #42
Combinatorial Circuit Example - - - Trinary System Adder Element
1) D and C are the "variables" for the addend.
2) B and A are the "variables" for the augend.
3) E is the variable for the carry in.
4) Y and X are the functions for the sum.
5) Z is the function for the carry out.

What is the use of the F variable? Please describe it.
 
  • #43
atharsani said:
Combinatorial Circuit Example - - - Trinary System Adder Element
1) D and C are the "variables" for the addend.
2) B and A are the "variables" for the augend.
3) E is the variable for the carry in.
4) Y and X are the functions for the sum.
5) Z is the function for the carry out.

What is the use of the F variable? Please describe it.

It is unused. It is not needed. Five variables are needed in this case, but maps are usually drawn with an even number of variables, so the uppermost variable representation is ignored, and that part of the truth table and the map can be left blank.


KM
 
  • #44
Dear Sir,
The map of Y is not correct, because in the truth table the output of Y at 26 (011010) is 1, but in the map the 1 is at 27. Please check it, sir.
 
  • #45
atharsani said:
Dear Sir,
The map of Y is not correct, because in the truth table the output of Y at 26 (011010) is 1, but in the map the 1 is at 27. Please check it, sir.

You are correct!

Both the "1" in 26 and the "Don't Care" in 27 got shifted to the left by one. The result, when they are put in the correct place, is that the term AC'E becomes BDE + AC'D'E.
(Note that all the F' values should have been left out. After all, F isn't used. The fact that they are all F' should have told me that.) The final result, then, is:

X = AC'D'E' + A'B'CE' + BDE' + AC'DE + BCE + A'B'C'D'E
Y = BC'D'E' + ACE' + A'B'DE' + BDE + AC'D'E + A'B'CE
Z = BC + AC'D + BD + DE + A'BE + ACE

Thanks for the correction. I was very pleasantly surprised to find someone who was interested enough to understand the subject well, and to go through it closely enough to find those errors.

KM
 
  • #46
Thanks, your tutorial is great!
 
  • #47
I have been hoping that someone would initiate a new string - - expanding the methods used in this string to illustrate a series on "State Machines". It is a simple step. With very little modification, the tables and maps can be extended to them.

KM
 
  • #48
Thanks a lot for your tutorial.

I have a question! How am I supposed to draw/build a ten variable Karnaugh Map?
 
  • #49
Rainier9 said:
Thanks a lot for your tutorial.

I have a question! How am I supposed to draw/build a ten variable Karnaugh Map?

Six variables horizontally, and four vertically. This would probably take at least an 11" X 17" sheet and a lot of work (both the drawing and the use). I have done it, so I know. Sorry it took this long.

KM
 
  • #50
Thanks for answering! I actually got a better way to 'build' my circuit.
 
  • #51
Sad to say but ...

Karnaugh maps became an obsolete intellectual curiosity when vacuum tubes and relays were replaced by dense semiconductor logic in the late 60's. Hardware is cheap and minimization is not usually worth the effort except in certain cases such as a cell for some memories that must be repeated many many times.

If I am wrong about this, I would love to be educated.
What practical use are they today ?

I remember having much fun learning them in Engineering school in the 60's.

You should put out a book / pamphlet with your articulation here.

Very nice
Thank you
 
  • #52
paulfr said:
Hardware is cheap and minimization is not usually worth the effort

I'm no expert on the digital design of today, but I think your argument that logic minimization is no longer worthwhile is wrong, paulfr.
In many cases the minimization may be performed in software, in a step called logic synthesis.
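For a flavor of what that synthesis step does internally, here is a minimal Quine-McCluskey sketch (my own illustration, not taken from any particular tool): it finds prime implicants by repeatedly merging implicants that differ in one bit, then greedily covers the minterms. Run on the earlier example minterms {0, 1, 2, 3, 10} (with no don't-cares) it yields A'B' + B'CD'.

```python
from itertools import combinations

def merge(a, b):
    """Merge implicant strings like '0010' and '0011' into '001-';
    returns None unless they differ in exactly one defined position."""
    diff = [i for i in range(len(a)) if a[i] != b[i]]
    if len(diff) == 1 and '-' not in (a[diff[0]], b[diff[0]]):
        i = diff[0]
        return a[:i] + '-' + a[i + 1:]
    return None

def prime_implicants(minterms, nbits):
    terms = {format(m, f'0{nbits}b') for m in minterms}
    primes = set()
    while terms:
        merged, used = set(), set()
        for a, b in combinations(sorted(terms), 2):
            m = merge(a, b)
            if m:
                merged.add(m)
                used.update((a, b))
        primes |= terms - used           # whatever could not merge is prime
        terms = merged
    return primes

def covers(imp, m, nbits):
    return all(c == '-' or c == x for c, x in zip(imp, format(m, f'0{nbits}b')))

def minimize(minterms, nbits):
    """Greedy cover of the minterms by prime implicants."""
    primes, left, chosen = prime_implicants(minterms, nbits), set(minterms), []
    while left:
        best = max(primes, key=lambda p: sum(covers(p, m, nbits) for m in left))
        chosen.append(best)
        left -= {m for m in left if covers(best, m, nbits)}
    return sorted(chosen)

def to_expr(imp, names="ABCD"):
    """Render an implicant string as literals, e.g. '-010' -> B'CD'."""
    return ''.join(n + ("'" if c == '0' else '')
                   for n, c in zip(names, imp) if c != '-')
```

A production synthesis tool uses far more scalable heuristics (Espresso-style minimization rather than exact tabulation), but the job - - two-level minimization under designer constraints - - is the same in spirit.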
 
  • #53
Point well taken

When I said "not worth the effort" , I meant by hand, yes.

Of course, design automation SW will implement a minimization step in a way specified with constraints called out by the designer: min power, min silicon real estate, maximum speed, etc.
 
  • #54
Kenneth Mann said:
Six variables horizontally, and four vertically. This would take probably at least an 11" X 17" sheet and a lot of work (both the drawing and the use). I have done it so I know. Sorry it took this long.

KM

I hope that you understand that in the real world, minimization is not done by Karnaugh maps.
 
  • #55
paulfr said:
Sad to say but ...

Karnaugh maps became an obsolete intellectual curiosity when vacuum tubes and relays were replaced by dense semiconductor logic in the late 60's. Hardware is cheap and minimization is not usually worth the effort except in certain cases such as a cell for some memories that must be repeated many many times.

If I am wrong about this, I would love to be educated.
What practical use are they today ?

I remember having much fun learning them in Engineering school in the 60's.

You should put out a book / pamphlet with your articulation here.

Very nice
Thank you

Yours are wonderful observations, but I couldn't disagree more. First, if you've followed these insertions, minimization itself is not the main thrust. The main use is to define logic relations that accomplish the desired tasks without design blow-ups. Little attempt was made to determine whether the end products were minimum or not - - though this is desired. The interest was simply to get the job done with the least extended effort and the greatest understanding of the end product.

Today, the design effort is often devoted to very large, dense packages, and the essential ingredient is understanding what is going on within those packages. We use Hardware Description Languages (HDLs) to help us get through the difficult and complex accounting involved in designing those packages. These allow us several shortcut approaches, but in the end they are simply alternative means of laying out the real estate and defining the interrelations needed to accomplish our logic schematics, and as such are interchangeable if we desire. We specify our preferred constructs and approaches and, with luck, the software lays out the real estate using the logic substitutions that are pre-defined in the definition of the languages - - according to how the language builders determined their preferred approaches.

It is important, however, that the designer still understand what goes into a logic design - - not simply minimization, but what makes it work. The design aids won't do the designing for you; they simply make it more convenient. Without an understanding of the underlying relationships, as can be worked out by mapping (and other approaches to symbolic logic), complex designs become much more difficult. You wind up with a few days or weeks of design lay-out, followed by months or even years of trouble-shooting.

KLM
 
