The one problem with interpreters (like VIM) is that they are slow.
In fact, VIM used about 400 of its own instructions to interpret a single target instruction. So your program ran roughly 400 times slower than it would have run natively.
The early PCs and clones ran 8088 chips at 4.77 MHz. Since those chips weren't all that efficient (not like the single-clock instructions of modern chips), you got less than 1 million instructions per second (1 MIPS). So, at its fastest, your 1 MIPS program was running at
1,000,000 / 400 = 2,500 instructions per second
Yeah, it was slow.
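To see where those 400 instructions go, here is a minimal sketch in C of the classic fetch-decode-execute loop (the names mem, ip, and execute are hypothetical, and I'm using a flat address instead of the 8088's CS:IP). Each line of the loop hides table lookups, flag bookkeeping, and address arithmetic on the host:

#include <stdint.h>

uint8_t  mem[1 << 20];   /* 1 MB guest address space */
uint32_t ip;             /* guest instruction pointer (flat, for the sketch) */

/* The big decode switch/table lives elsewhere; it alone can cost
 * dozens of host instructions per guest instruction. */
void execute(uint8_t opcode);

void run(void)
{
    for (;;) {
        uint8_t opcode = mem[ip++];  /* fetch */
        execute(opcode);             /* decode + execute */
        /* ...plus interrupt checks, cycle counting, and so on */
    }
}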
This meant that all sorts of tricks were used to squeeze every little bit of speed out of it that you could.
One of the biggest tasks of the interpreter was instruction decoding. Taking a byte stream like
A3 34 12
and determining that this was the instruction
MOV [1234],AX
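As a sketch of what the decoder does with those three bytes (hypothetical names again, and a flat address space rather than segment:offset): opcode A3 is "MOV moffs16, AX", meaning store AX at the 16-bit little-endian address in the next two bytes, so 34 12 becomes 0x1234.

#include <stdint.h>

extern uint8_t  mem[];   /* guest memory, defined elsewhere */
extern uint32_t ip;      /* guest instruction pointer */
extern uint16_t ax;      /* guest AX register */

/* Decode and execute the bytes A3 34 12: MOV [1234],AX */
static void op_a3_mov_moffs_ax(void)
{
    uint16_t addr = mem[ip++];            /* low byte:  0x34 */
    addr |= (uint16_t)mem[ip++] << 8;     /* high byte: 0x12 -> addr = 0x1234 */
    mem[addr]     = (uint8_t)(ax & 0xFF); /* store AX little-endian */
    mem[addr + 1] = (uint8_t)(ax >> 8);
}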
If you work with this for a while, you realize that you are much better off if you have two versions of the instruction decoder:
1) produces readable strings and computer instructions
2) just produces computer instructions
Most of the interpreter's processing will be internal. You are not (usually) going to be watching every instruction. If you don't produce strings for the user (case 2), it is much faster.
If you are going to be producing strings for the user (for example, tracing a critical code section), you are not going to care (in general) how fast the routine is (the user can't keep up with the computer).
So, you trade code space for speed. A common trade-off.
You just have to be VERY SURE that both decoders are "in sync" so you can switch between them without anything breaking (or interpreting instructions incorrectly...)
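One way to keep them in sync (a sketch of the idea, not the original code, and every name here is hypothetical) is to drive both decoders from a single opcode table, so fixing one entry fixes both paths at once:

#include <stdio.h>
#include <stdint.h>

typedef struct {
    const char *mnemonic;    /* printf-style format, e.g. "MOV [%04X],AX" */
    int         imm_bytes;   /* operand bytes to fetch after the opcode */
    void      (*execute)(uint16_t operand);
} OpInfo;

extern OpInfo   optable[256];  /* one entry per opcode byte, e.g.
                                * optable[0xA3] = { "MOV [%04X],AX", 2, ... } */
extern uint8_t  mem[];
extern uint32_t ip;

/* Case 2: the fast path - no strings, just decode and execute. */
void decode_execute(void)
{
    const OpInfo *op = &optable[mem[ip++]];
    uint16_t operand = 0;
    for (int i = 0; i < op->imm_bytes; i++)
        operand |= (uint16_t)mem[ip++] << (8 * i);  /* little-endian fetch */
    op->execute(operand);
}

/* Case 1: the trace path - same table and same execute hook,
 * plus a readable string for the user. */
void decode_trace(void)
{
    uint32_t start = ip;
    const OpInfo *op = &optable[mem[ip++]];
    uint16_t operand = 0;
    for (int i = 0; i < op->imm_bytes; i++)
        operand |= (uint16_t)mem[ip++] << (8 * i);

    char buf[64];
    snprintf(buf, sizeof buf, op->mnemonic, (unsigned)operand);
    printf("%05X  %s\n", (unsigned)start, buf);

    op->execute(operand);
}

Because both paths read the same table entries and call the same execute hooks, switching between them mid-run can't change what the instructions actually do.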