17 Cards in this Set

  • Front
  • Back

What does a translator do?

A translator converts programs in one language to another.

What does an interpreter do?

An interpreter carries out a program instruction by instruction.
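
As a rough illustration of the difference, here is a minimal sketch in Python using a made-up three-instruction toy language (the opcodes and program structure are invented for this example, not taken from the text):

```python
# Toy "language": a list of (opcode, operand) pairs for a single accumulator.
program = [("LOAD", 5), ("ADD", 3), ("PRINT", None)]

def interpret(prog):
    """Interpreter: fetch each instruction and carry it out immediately."""
    acc = 0
    for op, arg in prog:
        if op == "LOAD":
            acc = arg
        elif op == "ADD":
            acc += arg
        elif op == "PRINT":
            print(acc)

def translate(prog):
    """Translator: convert the whole program into equivalent Python source,
    which is then run as a separate step."""
    lines = ["acc = 0"]
    for op, arg in prog:
        if op == "LOAD":
            lines.append(f"acc = {arg}")
        elif op == "ADD":
            lines.append(f"acc += {arg}")
        elif op == "PRINT":
            lines.append("print(acc)")
    return "\n".join(lines)

interpret(program)           # executes the program directly, prints 8
exec(translate(program))     # runs the translated program, also prints 8
```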

What is a virtual machine?

A virtual machine is a conceptual machine, one that doesn't exist.

What is the difference between interpretation and translation?

An interpreter executes a program by fetching the first instruction, carrying it out, then fetching the next one, and so on. A translator first converts the original program into an equivalent one in another language and then runs the new program.

Is it conceivable for a compiler to generate output for the microarchitecture level instead of for the ISA level? Discuss the pros and cons of this proposal.

It is possible, but there are problems. One difficulty is the large amount of code produced. Since one ISA instruction does the work of many microinstructions, the resulting program will be much bigger. Another problem is that the compiler will have to deal with a more primitive output language, hence it will itself become more complex. Also, on many machines the microprogram is in ROM; making it user-changeable would require putting it in RAM, which is much slower than ROM. On the positive side, the resulting program might well be much faster, since the overhead of one level of interpretation would be eliminated.

Can you imagine any multilevel computer in which the device level and the digital logic levels were not the lowest levels? Explain.

During the detailed design of a new computer, the device and digital logic levels of the new machine may well be simulated on an old machine, which puts them around level 5 or 6.

Consider a multilevel computer in which all the levels are different. Each level has instructions that are m times as powerful as those of the level below it; that is, one level r instruction can do the work of m level r - 1 instructions. If a level 1 program requires k seconds to run, how long would an equivalent program take at levels 2, 3, and 4, assuming n level r instructions are required to interpret a single level r + 1 instruction?

Each additional level of interpretation slows down the machine by a factor of n/m. Thus the execution times for levels 2, 3, and 4 are kn/m, kn^2/m^2, and kn^3/m^3, respectively.
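
One way to spell out the reasoning behind these formulas (a sketch, using the same k, m, and n as the question):

```latex
% An equivalent program one level up needs only 1/m as many instructions,
% but each of them costs n instructions at the level below:
T_{r+1} = \frac{n}{m}\,T_r, \qquad T_1 = k
\;\;\Longrightarrow\;\;
T_2 = \frac{kn}{m}, \quad T_3 = \frac{kn^2}{m^2}, \quad T_4 = \frac{kn^3}{m^3}
```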

Some instructions at the operating system machine level are identical to ISA-level instructions. These instructions are carried out directly by the microprogram rather than by the operating system. In light of your answer to the preceding problem, why do you think this is the case? (Preceding problem: If a level 1 program requires k seconds to run, how long would an equivalent program take at levels 2, 3, and 4, assuming n level r instructions are required to interpret a single level r + 1 instruction?)

Each additional level of interpretation costs something in time. If it is not needed, it should be avoided.

Consider a computer with identical interpreters at levels 1, 2, and 3. It takes an interpreter n instructions to fetch, examine, and execute one instruction. A level 1 instruction takes k nanoseconds to execute. How long does it take for an instruction at levels 2, 3, and 4?

You lose a factor of n at each level, so the instruction execution times at levels 2, 3, and 4 are kn, kn^2, and kn^3 nanoseconds, respectively.
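
A quick numeric sanity check, using hypothetical values for k and n (neither is given in the problem):

```python
k = 10   # hypothetical: nanoseconds per level 1 instruction
n = 5    # hypothetical: interpreter instructions needed per interpreted instruction

for level in (1, 2, 3, 4):
    # each extra level of interpretation multiplies the time by another factor of n
    print(f"level {level}: {k * n ** (level - 1)} ns")
# prints 10, 50, 250, and 1250 ns for levels 1 through 4
```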

In what sense are hardware and software equivalent? Not equivalent?

Hardware and software are functionally equivalent. Any function done by one, in principle, can be done by the other. They are not equivalent in the sense that to make the machine really run, the bottom level must be hardware, not software. They also differ in performance.

Babbage's difference engine had a fixed program that could not be changed. Is this essentially the same thing as a modern CD-ROM that cannot be changed? Explain your answer.

Not at all. If you wanted to change the program the difference engine ran, you had to throw the whole computer out and build a new one. A modern computer does not have to be replaced because you want to change the program. It can read many programs from many CD-ROMs.

One of the consequences of von Neumann's idea to store the program in memory is that programs can be modified, just like data. Can you think of an example where this facility might have been useful? (Hint: Think about doing arithmetic on arrays.)

A typical example is a program that computes the inner product of two arrays, A and B. The first two instructions might fetch A[0] and B[0], respectively. At the end of each iteration, these instructions could be incremented to point to the next elements, A[1] and B[1]. Before indexing and indirect addressing were invented, this was how array loops were done.
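
A toy simulation of the idea, with a made-up instruction representation (the field layout is invented purely to show the operand being rewritten where an index register would be used today):

```python
A = [1, 2, 3]
B = [4, 5, 6]

# Hypothetical instructions stored in "memory" as mutable [opcode, address] pairs,
# so the program can rewrite its own operand fields (von Neumann style).
fetch_a = ["FETCH_A", 0]
fetch_b = ["FETCH_B", 0]

total = 0
for _ in range(len(A)):
    total += A[fetch_a[1]] * B[fetch_b[1]]
    fetch_a[1] += 1   # increment the address field of the instruction itself,
    fetch_b[1] += 1   # instead of using an index register (none existed yet)

print(total)          # 1*4 + 2*5 + 3*6 = 32
```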

The performance of the 360 model 75 was 50 times that of the 360 model 30, yet its cycle time was only five times as fast. How do you account for this discrepancy?

Raw cycle time is not the only factor. The number of bytes fetched per cycle is also a major factor, and it increases with the larger models. Memory speed and wait states play a role, as does the presence of caching. A better I/O architecture causes fewer cycles to be stolen, and so on.

Two basic system designs are shown in Figure 1-5 (the original von Neumann machine) and Figure 1-6 (the PDP-8 omnibus). Describe how input/output might occur in each system. Which one has the potential for better overall system performance?

The design of Figure 1-5 does I/O one character at a time by explicit program command. The design of Figure 1-6 can use DMA to have the controller do all the work, relieving the CPU of the burden, and thus making it potentially better.
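
A rough sketch of the two I/O styles as a simulation; the class and function names are invented for illustration and do not model any real hardware interface:

```python
data = b"hello, world"
memory = bytearray(len(data))

def programmed_io(dev, mem):
    # Figure 1-5 style: the CPU moves every character itself by explicit command.
    for i, byte in enumerate(dev):
        mem[i] = byte

class DmaController:
    # Figure 1-6 style: the controller copies the whole block, then interrupts once.
    def transfer(self, dev, mem):
        mem[:len(dev)] = dev
        return "interrupt: transfer complete"

programmed_io(data, memory)
print(DmaController().transfer(data, memory))
```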

Suppose that each of the 300 million people in the U.S. fully consumes two packages of goods a day bearing RFID tags. How many RFID tags have to be produced annually to meet that demand? At a penny a tag, what is the total cost of the tags? Given the size of GDP, is this amount of money going to be an obstacle to their use on every package offered for sale?

Each person consumes 730 tags per non-leap year. Multiply by 300 million people and you get 219 billion tags a year. At a penny a tag, they cost $2.19 billion a year. With GDP exceeding $10 trillion, the tags amount to about 0.02% of GDP, not a huge obstacle.
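
The arithmetic behind those figures, spelled out:

```python
people = 300_000_000
tags_per_person = 2 * 365                 # two tagged packages a day, non-leap year
tags = people * tags_per_person           # 219,000,000,000 tags a year
cost_dollars = tags // 100                # a penny a tag -> $2,190,000,000
gdp = 10_000_000_000_000                  # GDP taken as $10 trillion
print(tags, cost_dollars, 100 * cost_dollars / gdp)   # last figure: ~0.02% of GDP
```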

Name three appliances that are candidates for being run by an embedded CPU.

The following appliances are normally controlled by embedded systems these days: alarm-clock radios, microwave ovens, television sets, cordless telephones, washing machines, sewing machines, and burglar alarms.

At a certain point in time, a transistor on a microprocessor was 0.1 micron in diameter. According to Moore's law, how big would a transistor be on next year's model?

According to Moore's law, next year the same-size chip will hold 1.6 times as many transistors, so each transistor gets 1/1.6 of its current area. Since area goes as the square of the diameter, the diameter of next year's transistors must be about 0.079 micron.
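
The calculation, written out (assuming Moore's law is read as transistor count doubling every 18 months, so a factor of 2^(12/18) ≈ 1.6 in one year):

```latex
d_{\text{next}} = \frac{0.1\ \mu\text{m}}{\sqrt{2^{12/18}}}
                = \frac{0.1\ \mu\text{m}}{\sqrt{1.6}}
                \approx 0.079\ \mu\text{m}
```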