Introduction
I've worked with a number of computer languages over the years, and nowadays they are inventing so many that you'd think we could write the ultimate computer language and be done with it. Of course that'd put the language creators out of a job, so that's not going to happen. There's something of a conflict of interest there.
Anyway, I thought I'd just go through some of the computer languages that I have used over the years, to contrast and compare. There's no point in me talking about languages that I haven't used, and no point in telling me to use a different language. Once you are given a product to work on, you use the language it's already written in. Re-writing a lot of working code in a new language seems somewhat pointless. Some people say: "Why didn't you just write it in Angular Python Sharp Plus?", leading to the response: "Cos I've never heard of it!" or "We don't have those tools."
High level and Low level Languages
Computers can only run machine code, which is the language of the Central Processing Unit (CPU). There are many different types of CPU, and these have been getting more sophisticated and speedier as chip development becomes more complex. Machine code is an interpretation of one or more bytes of data as an instruction. CPUs have a number of registers, or onboard variables, and can see the available RAM as address-space. Each byte in the RAM has an address, from 0 through to the end of RAM. Computers can have less RAM than address space, leaving a void after the RAM, or can have more RAM than address space and have to select which parts of the RAM can be seen at any one time. There are also some special addresses that can control other chips in the computer or read data from devices, such as joysticks.
Machine code instructions are generally very simple, typically reading a value from memory into a register, writing a value from a register into memory, or performing some arithmetic or logical operation on a register value. There are also flow control instructions allowing you to test a value and then jump to a new set of instructions based on the result, or you can call a subroutine of instructions and then return back.
Assembly Language
Assembly Language is a low-level language with a one-to-one correspondence between an assembler instruction and a machine code instruction. We use a program called an assembler to convert assembler source into machine code. We can also use a disassembler program to view machine code as assembler. Assembler is very fast and precise, but also completely CPU-specific, and as soon as you refer to a hardware register or call anything in the Operating System (OS) then you are machine-specific. That is the weakness of assembler: you are usually writing for one platform only. Conversion to another platform could easily require every single line of code to change.
One peculiarity of the assemblers which lives on in languages today is that the language designers can't decide whether data should go from left to right or right to left. For instance, in 6502 and 6809 you might say:
LDA #0
which says: put a 0 value into the A register. The data has gone from the right to the left.
Z80 would do it as:
ld a,0
which you would read as load the a register with zero, also right to left.
Then, along comes 68000 and this happens:
moveq #0,d0
and the data is going from left to right: move quick zero into the d0 register.
Those examples are not reversible, because you can't put the A register into a zero value, but on the 68000 again you can:
move.l d0,d1
and you start thinking: is that move d0 to d1, or move d1 to d0? Without other lines around to remind you, it is ambiguous on its own. It's the former.
Similarly in Z80:
ld (hl),a
is that write out the a register to where hl is pointing, or load a with what hl is pointing to? If you don't know, it's not a good idea to guess. It's the latter.
Other languages are more explicit but the direction is again changing:
10 LET A = 0
MOVE ZERO TO A.
A = 0; not to be confused with A == 0, which does nothing for A, but you might get a compiler warning, if you're lucky. How many of these did I find?
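That last pitfall deserves a concrete illustration. A minimal C sketch, with invented function names, of how `=` in a condition silently clobbers the variable while `==` merely tests it:

```c
#include <stdio.h>
#include <assert.h>

/* Demonstrates the deliberate bug: the condition uses assignment (=)
   instead of comparison (==), so a is overwritten with 0 and the
   condition is always false. */
int buggy_check(int a)
{
    if (a = 0) {          /* BUG: assigns 0 to a, then tests the 0 */
        printf("a was zero\n");
    }
    return a;             /* always 0 now, whatever was passed in */
}

/* The intended version: comparison leaves a untouched. */
int correct_check(int a)
{
    if (a == 0) {
        printf("a was zero\n");
    }
    return a;
}
```

Most modern compilers will warn about the first form, if you're lucky and have warnings turned up.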
Macro Assembly
Many assemblers have a macro capability. This allows us to define new instructions that may be combinations of other macros or real instructions. You can then make the assembler a little more readable and compact, and I've used that feature to create a completely new language with which I defined all of the control routines for the game elements in Rainbow Islands, Paradroid 90, Fire & Ice, and Uridium 2. This in turn made for multi-platform implementation, including taking the Amiga version and making the Sega Saturn, PlayStation and PC versions. We still had to write all the plot routines specifically for each machine, but the data interpreter was written in assembler on the Amiga, and in C on the others.
8-Bit Assemblers
In 1983 we had 3 different CPU chips in 3 leading computers: 6502 in the C64, 6809 in the Dragon 32, and Z80 in the Spectrum. In terms of number of registers, the 6502 had the least, and the Z80 had the most. The amount of RAM in each machine was also different. This made supporting multiple platforms more time-consuming. When the Amstrad CPC 464 came along with its Z80, the logical donor was the Spectrum.
My approach to conversion from the 16K Spectrum to the 32K Dragon 32 was one of converting the routines function by function, and not instruction by instruction. I had fewer registers to work with but more RAM. I couldn't look at any Z80 source because Steve Turner was working in machine code and not assembler, since he didn't have a suitable Z80 assembler at the time. Steve did a structure diagram of the game and explained how it all worked. I was then free to implement the functionality by writing the code to best suit the CPU. I knew that I wouldn't run out of RAM, and indeed had some spare to add some additional graphics and features.
I then got to convert Lunattack to the C64. This time I had 6809 source code to work from, and all the experience of having written the game once. I had more RAM again, and one register fewer, so I still had to use the same philosophy of conversion. I also added more features again with the spare space, and used the hardware sprites where possible. Inspired by Alien and James Bond, the game began with the ship waking up and displaying messages ever faster. These messages were all largely unimportant, indeed the last almost unreadable message was: Cigar Lighter: On. It was all done in a light-hearted manner, but someone still complained that they didn't have time to read the messages.
8-Bit Tricks: Double-Entry Code
This is an 8-bit assembler trick only, I suspect, and is not for the faint-hearted. Back in the days of the Commodore VIC-20 there was only 3.5K of RAM to implement your game. That would be the game code, the graphics, the sound data, everything.
If anyone has ever run a disassembler on some code on an 8-bit machine, you might notice that it sometimes displays the code incorrectly, especially if you've started in the middle of some code. That's because the disassembly didn't start on an instruction, but on one of the following parameter bytes, which it tried to interpret as an instruction, so the disassembly is misaligned.
Now what if you could use that to your advantage, and ensure that the following byte can also be interpreted as a valid instruction to do something different, and carry on doing that for a few more instructions? You might want to bring the two subroutines together so that you just have different starts, or they might be completely independent. You would need to be clever about which variables you use in one strand to give you the right instructions in the other.
Personally I have never tried this, I've always had at least 32K of RAM, and that has proved to be enough. You could always find a graphic that you didn't need rather than torture yourself with this kind of madness, but I was led to believe that a certain Ox-like programmer had used it.
8-bit tricks: Self-modifying Code
Now here's one I have used. Should you find yourself needing the result of a nasty test a few times in a time-sensitive subroutine, then you might consider working out the answer and then setting a flag. Now all us structured programmers know that flags are the source of all evil, and should never be used. The wily assembler programmer might just change the program later on to choose the best path. That's pretty much a flag by any other name, but it's actually changing the code in some way. You might be adding a different value, or actually changing an add into a subtract, or indeed by-passing some code. It's a very naughty thing to do, and must really wind up any programmer who is converting your code to a cartridge-based platform, where self-modifying code doesn't work, at all! Nevertheless, just saving one instruction inside a loop can save some valuable time.
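Self-modifying code won't survive a C compiler, but the spirit of the trick, deciding the path once instead of re-testing inside the loop, can be sketched with a function pointer. The names and values here are invented for illustration:

```c
#include <stddef.h>

/* Two variants of an inner-loop operation. */
static int add_step(int v)      { return v + 4; }
static int subtract_step(int v) { return v - 4; }

typedef int (*step_fn)(int);

/* Work out the "nasty test" once, outside the loop, and cache the
   result as a function pointer - the structured-programming analogue
   of patching an add into a subtract. */
int run_loop(int start, int iterations, int use_subtract)
{
    step_fn step = use_subtract ? subtract_step : add_step;
    int v = start;
    for (int i = 0; i < iterations; i++) {
        v = step(v);   /* no per-iteration test */
    }
    return v;
}
```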
16-Bit Assemblers
When the Atari ST and the Commodore Amiga came out we actually had 2 computers with the same amount of RAM and the same CPUs. With 512K of RAM minimum we could see that at last we didn't have to worry about conversions. Game development time was necessarily going to increase with the larger space for programs and graphics, and the extra skills needed with larger palettes of colours. Of course the supporting graphics chips were quite different, but that's another story.
The 68000 CPU chip had 16 32-bit registers and felt like a real step up. Suddenly we weren't going to run out of registers. We thought about the best way to use the new chip, and decided that we should reserve a CPU register for always pointing at the hardware registers. This made all of the accesses to the hardware shorter address+offset format instructions. The central core of our games was what we called our Alien Manoeuvre Program system. Every game object was running through this system so we wanted it to be as efficient as possible. We reserved another register for the current game object so we could again address its structure elements by address+offset. We also reserved some data registers for parameters coming into control routines, and another for a return value. This meant that all of the control routines, and there were a couple of hundred, could rely on addresses and values already being ready in registers and not have to be loaded.
Linked lists come into play at about this time. You can have single links going forwards, double links going forwards and backwards, circular lists or terminated lists. I expect there are others. Sometimes a terminated single link list is all that you need. You have a variable point to the address of the first element, say your first game object, and it then contains a pointer to the next one, or a zero value if it's the last. For a circular list the end pointer would not be zero, but would point back to the first object. You then have to recognise that you've hit the end by remembering where you started.
Linked lists allow you to access elements quickly because you keep all the active ones in one list and the dead ones in another list. You "unplug" elements from one list and plug them into the other. This saves going through an array and having to test for dead or alive first. You can also link objects together in additional ways with more pointers. For example, we kept a series of 16 different plot layers so we put all the plotted objects we wanted at the back into the lower layers, and all the front objects in the upper layers.
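A terminated singly-linked list with that unplug/plug-in move can be sketched in C like this (invented names, not Graftgold's actual structures):

```c
#include <stddef.h>

/* A minimal game object in a terminated singly-linked list. */
typedef struct GameObject {
    int id;
    struct GameObject *next;   /* NULL marks the end of the list */
} GameObject;

/* Push an object onto the front of a list (active or dead). */
void push(GameObject **list, GameObject *obj)
{
    obj->next = *list;
    *list = obj;
}

/* Unplug the front object from one list and plug it into another,
   e.g. moving a freshly dead alien onto the free list. */
GameObject *move_front(GameObject **from, GameObject **to)
{
    GameObject *obj = *from;
    if (obj == NULL) return NULL;
    *from = obj->next;    /* unplug from the source list */
    push(to, obj);        /* plug into the destination   */
    return obj;
}

/* Walk a terminated list and count its elements. */
int count(const GameObject *list)
{
    int n = 0;
    while (list != NULL) { n++; list = list->next; }
    return n;
}
```

No searching, no alive/dead test per element: membership of a list is the state.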
16-bit Tricks
As mentioned above, reserving certain registers for a specific pointer that never changes can save time. Having a register point to the hardware registers helps in using short offset addressing off a pointer rather than absolute pointers for every register. It also means that this register is immediately available to interrupt routines and it doesn't need to be stacked, loaded, or restored. Of course you do have to set the register up before you set up any interrupts, and you must never ever change the value in that designated register. EVER!
Our Alien Manoeuvre Program language was such a fundamental part of our 16-bit games, from Rainbow Islands to Virocop, that we allowed it to use registers for specific purposes for all of its subroutines, enabling it to run quickly even though it was an interpreted language.
We used assembler macros to generate a jump table of functions and data that indexed quickly into those tables, including the ability to generate parameters. We could also use names for those parameters, to ensure valid data or vet values for correct range. It was quite quick to create objects, which we called AMPs for short. We could write a complex control function or cluster simple movement and update functions. There were various interrupt levels so that animations or collisions could operate asynchronously and cause behaviour changes.
I was quite fond of using numbers multiplied by 4 as indexes into tables of 32-bit values, and quite often managed to sneak a couple of bits into the bottom two positions, sometimes as a count from 0 to 3, or otherwise as two of those evil indicator bits. You can just mask off those bits once you've tested them and then use the resulting 4-byte aligned offset to pull a long pointer out of a table. Sometimes I'd then call the subroutine at that address. This really goes badly if you fall off the end of the table!
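The trick might look like this in C, with an invented flag layout and table (the originals were 68000 assembler):

```c
#include <stdint.h>

/* A table of 32-bit values (standing in for long pointers).
   Offsets into it are index*4, leaving bits 0-1 free for flags. */
static uint32_t table[4] = { 100, 200, 300, 400 };

#define FLAG_MASK 0x3u    /* the two low bits smuggled into the offset */

/* Unpack a table entry and the smuggled flag bits from one packed
   offset: mask off the flags, then use the 4-byte aligned remainder. */
uint32_t lookup(uint32_t packed, unsigned *flags_out)
{
    *flags_out = packed & FLAG_MASK;         /* peel off the evil bits */
    uint32_t offset = packed & ~FLAG_MASK;   /* 4-byte aligned offset  */
    return table[offset / 4];                /* pull the long value    */
}
```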
I'll probably tackle this more in a future blog, but I'll show a taster here. This is the AMP for a bullet fired from an enemy ship in Uridium 2. It will take its position from the ship that fired it, so that's already set up.
InitBullet1 PrimeList
Prime.w _SpriteID,'B1'
Prime.w _SpritePlot,_PlotSprite
Prime.w _SpritePriority,60
EndPrimeList _SpriteEnd
AMPBullet1
StereoSound FXFire1
LVector .Die
BulletSpeed 8,BaseBullet1,BaseBullet1
Collision EMissileClass,_CSizeS2X2
CVector EMReadClasses,.Die
Repeat Second*2
EBulletCart ; Expects LVector set
EndRepeat Delay
.Die
ModifyGlobal EnemyBulletCount,-1
Die
The first section initialises (primes) structure elements: an ID so we know what it is when we're debugging, a plot method so we send it to a hardware sprite plotter, and a display priority of 60 (we used 0 to 60 in steps of 4 so we could sort the sprites by display priority into linked lists for plotting). Hardware sprites rode over everything anyway, so 60 is maximum and just a reminder; we have no control over it in this case.
EndPrimeList marks the end of initialisation and instructions will follow.
AMPBullet1 is a courtesy label only, and the first thing the bullet does is make a firing noise. The ship could do that, of course, but this way allows the same ship to fire different-sounding bullets more easily. We then set the LVector (L for Life) so that we can kill the bullet when it leaves the visible screen. EBulletCart is the function that moves enemy bullets by Cartesian amounts and detects screen edges. We could also move objects by polar amounts, more of that in another future blog.
BulletSpeed sets the bullet left/right speed to 8 pixels a move faster than the ship that fired it left or right, and also sets the animation base frame to the same value in this case as the bullet graphic is symmetrical.
We then set our collision class to say we're an enemy missile and are 2X2 collision blocks high and wide (each block is 4 pixels), and then if we get hit by certain classes in the same place then we die. These classes would be the player, the drone ship and explosions.
We then set up a loop of 2 seconds maximum, and call a specialist function EBulletCart every game frame (50th second on PAL Amigas) to move the enemy bullet in a Cartesian manner and check for leaving the screen. The Delay option tells the AMP processor that it's the last instruction for this game frame, and to resume at the next instruction on the next frame.
If 2 seconds elapse the bullet drops out of the Repeat loop and dies anyway, otherwise it can die by leaving the screen or by colliding with a Manta or an explosion. The enemy ship firing routine actually checks for the number of enemy bullets already in the system so we can regulate the ferocity of the game by level, and also not overload the plotting system. Mayhem Mode on the A1200 lifted this restriction!
That's an extra language I wasn't expecting to include, but hey.
C
C is a mid-level language that lets us get to the bits and the bytes but also has more sophisticated constructions that control the code structure. There's plenty of rope to tie yourself in knots, to mix one's metaphors.
When I found out in 1994-ish that Graftgold was going to be developing in C for PC, PlayStation and Sega Saturn I was aghast. Why would I want to swap super-fast assembler for slower C, with all its mysterious pointers and baggage? Seemingly the benefit was that we could write multi-platform code and the machines were fast enough to cope with the slower execution speed of the language.
But there was more to it than that. Sorry to start a sentence, let alone a paragraph, with a conjunction, but this is important! The new CPUs were of a cunning nature, able to be decoding and executing multiple machine code instructions at a time. Worse than that, if one comes across a nasty big instruction followed by a simple one, it can actually execute the simple one first. Similarly, if it's evaluating whether to take a branch, and the following instruction causes an exception before it has decided to take the branch, then it can't back out of the exception. Also, if two consecutive instructions have a dependency then they too can execute out of sequence and mess up your expectations.
Whilst I did manage to write an 8086 assembler video decoder according to those rules, it was tough going. You have to think in two threads and interleave the instructions. I did it out of stubbornness to prove that I could.
Time to let the C compiler take the reins! It took a while for me to get the hang of pointers, and pointers to pointers. You have to be totally confident in what you mean, getting all the asterisks and brackets in the right places.
C is at least a well-structured language. You group instructions together with curly brackets, colloquially called squirlies, and indent your code nicely with tabs so you can see the fors and the whiles. There are very few keywords in the language to learn, the complexity is elsewhere. There are libraries of code that you can easily bolt to your own to read files, or maybe manipulate text, already written for you. You've just got to figure out what it's called and where it's defined.
We had an office meeting to discuss where the curly brackets should go. Well, that caused quite a kerfuffle, I can tell you. People who had already developed their own style differently were now being asked to agree on layout and style. I bet every company has had the same discussion. Anyway, Steve wisely worked out that it didn't matter too much what standard we adopted, as long as we did adopt one so that we can easily read each other's code.
Whilst C code won't be as fast as some well-written assembler, it is much faster to develop, and avoids all of the pitfalls of modern CPUs being too cunning. You'll spend weeks trying to debug stuff that collapses so fast you won't have time to say: "How did that register get corrupted?" You should also be able to build up your own complexity satisfactorily and keep control of things.
The C compiler can do things to make your code faster that you won't even realise. Bear in mind that you can compile code in Debug mode or Release/Optimised mode. I have had bugs that only occur in the Release mode, and you can't debug it, by definition! They're stinkers, and you have to go back to the old ways of staring at the code, or putting lots of printf statements in the suspect area. Real old school.
Optimised code will give many speed advantages over Debug code. For a start, there will be less of it! The compiler can re-arrange your code in a non-destructive way to get best use of the machine code. If you call a small function it'll ditch the call and copy the code in-line to save all the parameter setup and return. It can also rearrange your variables (oo-errr!) in order to pack them in more efficiently. If you do have any buffer over-runs, this will show them up. Debug mode seems to leave plenty of space between local variables so small overruns don't get noticed.
Up to our CD32 work, our games used to take over the whole machine. There was no intent to be able to get back to the OS after a quick game. This was for a couple of reasons: firstly you wouldn't know what the machine was already up to and what resources were already being used and unavailable, and secondly we wanted to control everything!
Now we had to ask the OS for working memory. It's a really good idea to write your own memory allocator. Grab a load off the OS and manage it yourself. You can't trust the OS not to suddenly decide to do a garbage collection and rearrange your RAM in the middle of a game cycle, taking seconds to do so. I just about persuaded Dominic not to write a similar garbage collection routine in our ST memory allocator for that reason.
Our Graftgold memory allocators also allocated a few extra bytes over what was asked for, and marked some of the space in front of and after the memory given with a sequence of characters so that when the memory was freed we could just check the marks to see if the buffer had been overrun, or underrun. We also needed some extra bytes for linked lists of the memory so we could keep track of it.
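The guard-byte idea can be sketched in C; the mark value, layout and names here are invented, not the actual Graftgold allocator:

```c
#include <stdlib.h>
#include <string.h>

#define GUARD_SIZE 8
#define GUARD_BYTE 0xA5   /* invented sentinel value */

/* Allocate size bytes with a size header and guard zones before and
   after the caller's block. Layout: [size][guard][user][guard]. */
void *guarded_alloc(size_t size)
{
    unsigned char *raw = malloc(sizeof(size_t) + 2 * GUARD_SIZE + size);
    if (raw == NULL) return NULL;
    memcpy(raw, &size, sizeof(size_t));                 /* remember size */
    memset(raw + sizeof(size_t), GUARD_BYTE, GUARD_SIZE);        /* front */
    memset(raw + sizeof(size_t) + GUARD_SIZE + size,
           GUARD_BYTE, GUARD_SIZE);                              /* rear  */
    return raw + sizeof(size_t) + GUARD_SIZE;           /* caller's block */
}

/* Check the guards and free; returns 0 if intact, -1 if the block was
   overrun or underrun at some point during its life. */
int guarded_free(void *ptr)
{
    unsigned char *block = ptr;
    unsigned char *raw = block - GUARD_SIZE - sizeof(size_t);
    size_t size;
    memcpy(&size, raw, sizeof(size_t));
    int ok = 1;
    for (int i = 0; i < GUARD_SIZE; i++) {
        if (raw[sizeof(size_t) + i] != GUARD_BYTE) ok = 0;  /* underrun */
        if (block[size + i] != GUARD_BYTE) ok = 0;          /* overrun  */
    }
    free(raw);
    return ok ? 0 : -1;
}
```

The real thing also threaded the blocks onto linked lists so leaks could be tracked, as described above.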
I have surprised C programmers who are watching me debug when their code calls a library function that is written in assembler, and I just trace it through and look at the registers, and they go "What the Hell is that?"
C++
This was a university prank. We'll have no further discussion on the matter.
COBOL
COBOL is a high level language. By that, we mean that it is much more readable than assembler, is far divorced from machine-code, and supports more immediately useful functionality within the language itself. This version of COBOL didn't have too much mad punctuation nor any brackets of any kind other than for mathematical formulae.
My first job was as a trainee programmer in COBOL. This was the business programming language of choice. We had 3 months of training, the first two weeks of which was reading the training manuals in the training manager's office. The training manager usually managed to be on holiday while new trainees started, at least that's how it seemed. There were 2 of us starting on the same day in July 1979. I had finished sixth form, was waiting for my A-Level results and asked to start a couple of weeks early as I had no money and was bored at home. At this time I had seen a Commodore PET at school but not used a computer at all.
Having no previous experience of computers, I lapped up whatever I was given and treated it as the industry standard, which it probably was. We used COBOL (19)76 on an IBM mainframe.
COBOL is very formalised. You have to separate your variables from your code and your I/O buffers in strictly named Divisions and Sections. There was no such thing as allocated memory; in fact, there were no libraries of code provided at all, you just had the language itself. Actually, thinking back, for specialised functionality such as database access or CICS screen display calls there was additional syntax, but apart from that we wrote everything for each program. Library calls probably wouldn't make much sense, since calls to subroutines didn't pass parameters inline, and there was no such thing as a function that returned a value.
Debugging was rather crude, since we had to share 6 monitors between about 40 staff, booking 15 minute sessions on them which just about gave us time to submit a program for running. Then you went back to your desk and waited for the print-out trolley to bring you the results. If the program crashed you got a thick lump of paper with a hex listing of all the RAM you used, and the line number of your program where it fell over, plus anything your program printed. You could output your own messages, and the big tool was READY TRACE. This listed all of your routine names called during the run. For a big test that too could generate a big wad of fan-fold paper that was more useful as a doorstop.
I do recall that for my games I needed a random number, and I didn't know how to algorithmically generate random numbers so I got one of my colleagues to write a short FORTRAN program to provide a random number and I called that. Rather OTT maybe, but it got the job done.
COBOL source code, like C, gets compiled rather than assembled. One line of COBOL might generate a few or even many lines of machine code. The source code could be compiled on different hardware and produce the same functionality with possibly very different machine code. This makes it compile-time portable. Some newer languages, like C# and VB are run-time portable.
Where I worked they kept the assembler books under lock and key in the Technical Services Office, especially if they saw me walking down the corridor towards them. For a long while they were in denial that such a thing even existed, and we only got a glimpse of assembler in the core dump print-outs we regularly produced. One day I saw a new screen appear on the terminal with, wait for it, animation. It was just an announcement screen with a line of text and a border of asterisks with different brightness values, but they were moving in real time. I asked how they did it and Simon "Ratfink" said he had implemented it in assembler. I decided I could do the same in COBOL. It didn't take many lines of code, but... it ran so much slower than the assembler one, maybe a tenth of the speed. That was it, I knew then that assembler was the business. At that point they made me promise not to write anything in assembler on their mainframe. In fairness, I didn't.
It was probably about this time that the ZX81 and the C64 came out. Yes, you could write in BASIC, but it was going to be even slower than COBOL.
BASIC
I haven't written anything commercial in BASIC, but I did write some Dragon 32 demos in it for my own amusement. BASIC is an interpreted language, rather than compiled. This means that the BASIC execution program is reading your lines of source code as it comes to them, working out what they say, and then doing them. Each statement would then cause a subroutine in the OS to be called. Dragon BASIC had a sprite plot routine built in, which I thought was decent of them because I hadn't tried to write one in assembler at this time. I got a lunar lander type of demo done where the landscape was randomly created and you could define where and how big the landing places were.
Since BASIC is interpreted, each line read afresh over and over again, it will be quite slow compared to machine code. ZX Spectrum BASIC stored each keyword as a single code, entered as a Vulcan-nerve-pinch multi-key-press, which made it faster to interpret, so you could say it was part-compiled. This also made the source code smaller, meaning that you could write a larger program.
We also took a shine to the Spectrum random number algorithm. Steve had a book with the full OS listing. The routine was quite revered as not generating noticeable patterns. When we got to 16-bit we did do some experiments with random number algorithms. Dominic found one that was very even, it generated each number between 0 and 65535 once in 65536 calls, and not just by generating them in sequence! This meant it gave even distribution, should that be what you need.
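Dominic's actual algorithm isn't documented here, but a full-period 16-bit linear congruential generator has exactly that property: with suitably chosen constants it visits every value from 0 to 65535 exactly once per cycle, and not in simple ascending order. A sketch, using one classic constant pair:

```c
#include <stdint.h>

static uint16_t state = 0;

/* Full-period 16-bit LCG. By the Hull-Dobell theorem the period is
   the full 65536 when the increment is odd and the multiplier is
   congruent to 1 mod 4, which 25173 and 13849 satisfy. */
uint16_t next_rand(void)
{
    state = (uint16_t)(state * 25173u + 13849u);
    return state;
}
```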
We ended up converting the Spectrum random number generator from Z80 to 68000. It's quite a short routine, gives fairly even distribution, but not totally flat, and the 68000 chip managed it all easily. On the C64 I had the SID chip create all the explosion/noise sound effects on the one channel, and between effects it was playing silent white noise to itself, and there was a port that you could read a random byte from which was what the SID chip was playing. That was the shortest random number routine ever.
I did look at the Spectrum routine when I wanted a C random number routine, but I decided that I would go with the built-in one. Since space isn't an issue, what I do is generate an array of, say, 1024 random numbers at the beginning of a level, and then I run 2 pointers round the array, incrementing each pointer by a different prime number, and read out a couple of values that I hash a value from. That way I get very fast random numbers during the game when I need them quickly.
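That two-pointer pool technique might be sketched like this in C (pool size and prime steps are example values):

```c
#include <stdlib.h>

#define POOL_SIZE 1024

/* Pre-generated pool of random numbers, filled once per level. */
static unsigned pool[POOL_SIZE];
static unsigned idx1 = 0, idx2 = 0;

/* Do the expensive generation up front, when speed doesn't matter. */
void fill_pool(unsigned seed)
{
    srand(seed);
    for (int i = 0; i < POOL_SIZE; i++) pool[i] = (unsigned)rand();
    idx1 = 0;
    idx2 = POOL_SIZE / 2;
}

/* Fast in-game random: walk two pointers round the pool by different
   prime steps and hash the two values together. No call to rand()
   during the game loop. */
unsigned fast_rand(void)
{
    idx1 = (idx1 + 7)  % POOL_SIZE;   /* prime step 1 (example value) */
    idx2 = (idx2 + 13) % POOL_SIZE;   /* prime step 2 (example value) */
    return pool[idx1] ^ (pool[idx2] << 1);  /* cheap hash of the pair */
}
```

Because the steps are coprime to the pool size, each pointer visits every slot before repeating, and the two sequences drift against each other.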
Visual BASIC
I've always said that I've not written any new lines of Visual BASIC. I used to change, correct and debug other people's code. I used to delete a lot of lines of extraneous code too.
This language is an advancement over BASIC as it includes a plethora of library code and functionality to be able to design screens and build applications quickly. It's also a lot more structured and the editors tend to help you write the code and navigate around your project. You could prototype software with it, but I wouldn't write a commercial game with it, though a simple game is well within its capabilities.
I tended to find that Visual BASIC programmers are an optimistic lot. They put a few lines of code together and usually it works. If it doesn't they write some more lines of code to do the job in a different way, and they don't necessarily take out the lines that didn't work. This ultimately does produce confusion because the code that looks like it does a job actually doesn't. I don't think universities teach comprehensive debugging strategies, which you do need if you write in assembler.
John Cumming wrote us a STOS editor for creating the level maps in Rainbow Islands and Paradroid 90. That was a custom BASIC of the Visual kind for the Atari ST. It seemed to be pretty fast to run, and if memory serves you could compile the code to produce a proper executable that would run fast.
Finally
There have been other languages along the way, mostly utility interpreted languages. We used DOS command files (.bat or .cmd) to control graphics utilities that would take the artists' bitmap files and generate optimised game graphics data. I have also used XEDIT macros and EXEC2 command files to do such things as allow fast typing of COBOL programs.
I think it's reasonable to say that I have now forgotten COBOL, EXEC2, 6502, 6809, Z80, 8086 and 68000 assembler. You do just forget stuff if you don't use it. I still have my 68000 reference book though. I could probably read and understand my old program listings if I could only find them! I'm betting they're all in the same box, with my stack of Zzap!64s.
I hated C to start with, as is the frustration of throwing everything away and learning something new, but it is now my language. You need it for DirectX development anyway.
I have also learned the database language SQL, which I quite liked under the Oracle banner, but SQLServer has a lot of quirks that caused us performance issues. It'll be good when it's finished, is what I always say. SQL is quite intuitive though, and is simple to learn, if difficult to master!
I managed to sidestep any number of other languages such as Java, JavaScript, C#, and Perl; I'm only capable of being a Jedi Master of one language at a time. I did like to mention my Python skills in the office occasionally though. It always made Rich laugh, anyway!