Thursday 2 August 2018

Picking up the Thread

If you were at the 2018 Cambridge Dragon Meetup, then you would have experienced a number of cool things, including seeing some ultra-rare Dragon prototypes, getting your copy of 'Inside the Dragon' signed by one of the original authors, and having a random conversation concerning pickled fish with a Spanish guy who lives in Sweden.

You would also have been lucky enough to score an olde style cassette tape edition of Ciaran Anscomb's Dunjunz, and to get the author to scribble on it. I was too chicken to ask after an earlier name/face mix up, and had to get someone else to ask for me... But I have to say, everything about this game, including the tape and inlay, has been produced to a beautifully high standard. I believe some copies were still available at the time of writing, so get 'em while you can. (Some info here)

Another great benefit of attending a show is that it renews your enthusiasm for your own projects. I'd lost focus on my game and started messing around with a couple of side projects to the extent that the trail had gone cold and I didn't really know what to do next.

So while I was intently listening to Steve Bamford's explanation of how the sprite engine and behavioural scripting works in his awesome Kiruke no Shima project, I started to remember I still had a lot of work to do to get my own sprite engine fit for purpose.

The Smell of the Beast


The first thing to do then was to get reacquainted with my own code. Now, as soon as I fired up the editor I was immediately hit by the unpleasant smell of "get it working now, tidy up later". Except without the "tidy up later" part. And I wasn't entirely sure about the "working" either. Yes, there was a sprite engine, but it didn't really have any organising principles, making it hard to work with, which in turn was stalling development.

As things stood, a sprite was a fairly passive object that relied on external code to make interesting things happen, such as moving in something other than a straight line. What I really wanted was for the sprite to be in charge of its own behaviour from initialisation to destruction. I wanted the game engine to trigger the creation of sprites and the sprites themselves should look after the details.

I made a fairly simple change to start with. Instead of passing the sprite directly to the drawing routine, each sprite now contains a vector pointing to code that is responsible for updating the sprite. This vector could simply point to the drawing routine, or it could point to a routine that implements more complex behaviour. When it's done, the update routine can choose to jump to the drawing routine, or skip the drawing entirely and move on to the next sprite in the list.

So what use is a sprite that doesn't always get drawn? Very useful as it turns out: If each chunk of update code changes the update vector to point to a new chunk of code, we can chain together a sequence of actions and suddenly we have ourselves some cooperative multi-tasking, with each sprite effectively running in its own thread.
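The idea is easier to see in a higher-level sketch. Here's roughly what the chained update vectors amount to, written in Python with made-up names (the real thing is 6809 assembly, of course, and the details differ):

```python
# Sketch of sprites as cooperative threads: each sprite holds an 'update'
# vector that it rewrites to chain from one behaviour to the next.
# All names here are illustrative, not taken from the actual game code.

def make_sprite():
    sprite = {"frames_init": 2, "drawn": 0}

    def init_step(s):
        # Do a chunk of setup work and skip the draw this frame.
        s["frames_init"] -= 1
        if s["frames_init"] == 0:
            s["update"] = draw_step   # chain to the next behaviour

    def draw_step(s):
        s["drawn"] += 1               # stands in for the drawing routine

    sprite["update"] = init_step
    return sprite

def run_frame(sprites):
    # The main loop just calls each sprite's current vector, much like
    # JSR [sprite_update_vector] on the 6809.
    for s in sprites:
        s["update"](s)
```

A sprite created with two init steps spends its first two frames invisible doing setup, then appears and is drawn every frame after that.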

Uh oh, story time


This is certainly not a new idea and is pretty much a form of round-robin scheduling, which has been around forever. When I started in my current employment, they were still using 6800-based control systems to run large pieces of automated test equipment. The software would have had something like ten separate threads of execution to run the various subsystems in parallel with a main loop that boiled down to a list of instructions of the form JSR [subsystem_n_vector], for subsystems 1 to n.

The thread for one of these subsystems would be a set of short routines. When one routine had completed, it would store the address of the next routine in the vector and then RTS to yield control back to the main loop and allow the next subsystem to do some work. The next time the subsystem got called, it would jump to the new routine thanks to the updated vector. If the subsystem needed to wait for a condition, such as a sensor turning on, or a measurement becoming available, then it would just leave the vector unchanged until the condition was met. The same routine would get called repeatedly until it decided it was time to change the vector. The key point is that every routine yields control without hogging too many CPU cycles, allowing the other threads to get some attention.

One of the strangest jobs I recall was for a client who wanted us to write the software for their machine in BASIC. And not just any BASIC, but VBA, inside an Excel workbook, for the sole reason that they had some in-house experience of VBA. Mad as a box of frogs. So I had to write a multi-threaded application in BASIC, and I did it in a vaguely similar way to the old 6800-based control programs. Each thread had a SELECT...CASE statement that chose a subroutine to call based on the value of a state variable, and the subroutines would update the state variables to control the sequence.

It worked surprisingly well. There was the illusion of several things happening at the same time while still having a responsive user interface, though we were somewhat amused when the same client came back later and asked us to help them demonstrate that the system followed GAMP guidelines. (Good Automated Manufacturing Practice). Well, using a properly documented language that conforms to international standards would have been a good start...

Back to the plot


But I digress. I mentioned that not drawing sprites was a useful thing. When not drawing a sprite, we can use that time to perform initialisation tasks, spreading the work over several video frames if necessary. Only when everything is ready do we start drawing the sprite. This means we have greater control over the amount of work that occurs during each video frame, reducing the chance of doing too much and missing the video sync. (aka slowdown and juddering)

This in turn unlocked something I'd been wanting to do for a while but wasn't sure how best to go about it: Pre-shifting the sprite graphics on demand.

Just to recap, the sprites are 12 x 12 pixels. As we're in a four colour mode, this means 3 bytes x 12 rows = 36 bytes per sprite, plus another 36 bytes for the mask. 72 bytes total.

That's fine if we want to draw the sprite at whole byte offsets, but I need pixel precision, meaning that the sprite image needs to be shifted to suit. As this is quite a slow operation, I preshift the data, so that we have the same sprite image repeated in memory four times, each with a different degree of shift.

That takes up a lot of memory. The shifted images are now four bytes wide, so we have 4 bytes x 12 rows x 4 shifts = 192 bytes per sprite, plus the mask making 384 bytes total. This quickly adds up. For example, the end of level boss is built from four different sprites, and therefore requires 1536 bytes.
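The preshift itself is simple enough to sketch in Python. One 12-pixel, 2-bits-per-pixel row occupies 3 bytes; shifting it right by 0 to 3 pixels (0/2/4/6 bits) spreads it across 4 bytes:

```python
# Sketch of preshifting one sprite row. Not the game's actual routine,
# just the same arithmetic: four shifted copies, each one pixel (2 bits)
# further right, widened from 3 bytes to 4.

def preshift_row(row3):
    assert len(row3) == 3
    value = int.from_bytes(bytes(row3), "big") << 8   # place in a 4-byte field
    shifted = []
    for pixel_shift in range(4):
        shifted.append(list((value >> (2 * pixel_shift)).to_bytes(4, "big")))
    return shifted
```

Doing this for all 12 rows of image and mask gives the 4 bytes x 12 rows x 4 shifts arithmetic above.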

Late shift


All of this preshifting happens before the game loop starts, which is wasteful. Some sprite types never appear on the screen together, and therefore don't need their preshifted graphics in memory at the same time. It would be a more efficient use of memory if sprites could do their own preshifting just in time.

Now that I have this threading model, the task is much easier. I've broken down the preshifting task into chunks of work that each take no more time than it takes to draw a sprite. When certain types of sprite are triggered, they first preshift their graphics data into a common area, then perform any other necessary initialisation, then start running the normal update code, finally appearing on screen. This occurs over several video frames but is still a fast process in human terms.

As I write this, it suddenly occurs to me that the source data for the masks only requires one bit per pixel and could be represented by 18 bytes instead of the current 36 bytes. The preshift routines would just need an extra step to perform the decompression. I could maybe even run-length encode the source data. Now there's a thought...
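The decompression step would be cheap. As a quick sketch (assuming a mask pixel is either fully opaque or fully transparent, so one bit expands to the 2-bit values 00 or 11):

```python
# Illustrative expansion of a 1-bit-per-pixel mask row into the 2bpp
# format: each mask bit becomes a 2-bit pixel (1 -> 11, 0 -> 00).
# A 12x12 mask would then fit in 18 source bytes instead of 36.

def expand_mask_row(bits12):
    # bits12: 12 mask bits packed into an int (bit 11 = leftmost pixel)
    out = 0
    for i in range(12):
        bit = (bits12 >> (11 - i)) & 1
        out = (out << 2) | (0b11 if bit else 0b00)
    return out.to_bytes(3, "big")   # 12 pixels * 2bpp = 3 bytes
```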

Pythonic png to packed pixel processing


Since moving from Windows to Linux, I had put development of graphics on the back burner, as I had become too reliant on Windows software, and the front burner already had some peas and noodles on the go.

In particular I was using Tile Studio to draw the sprites. This has a neat output scripting feature that generates source files that can be directly included in the game build, but working in Linux I was a bit stuck. I could draw stuff in Gimp, but lacked a way of getting the data into the game.

I ended up creating a conversion script in Python. This takes a source image in the form of a png file and converts it to the packed pixel format required by the game. It also generates masks from any colours that have out of range indices. In the images below, cyan and black would be interpreted as mask rather than image:


The conversion program can slice the image up into tiles, so we can create a whole sheet of sprites in Gimp and then have the conversion program extract the individual sprites. This works nicely in conjunction with a Gimp plugin that converts layers into sprite sheets.

I've still got a lot of learning to do in Python, but I was very impressed with the language features that make this kind of task easier than you might at first imagine. For example, I was able to perform the tile slicing without any calculation or array indexing. Instead, the image data is fed through a series of iterators that return progressively smaller pieces of image, with a simple matrix transposition (matrix_t = zip(*matrix)) in the right place to put the data in the right sequence for output. It might be useful for other projects so I'll publish it once I've made it a bit more user-friendly.
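To give a flavour of the approach (this is a simplified reconstruction, not the actual script), the slicing can be done entirely with iterators plus one transposition:

```python
import itertools

# Simplified sketch of index-free tile slicing: cut the image into
# horizontal bands one tile high, cut each row of a band into tile-width
# segments, then zip(*...) transposes so segments regroup into tiles.

def chunks(seq, n):
    it = iter(seq)
    return iter(lambda: list(itertools.islice(it, n)), [])

def slice_tiles(rows, tile_w, tile_h):
    tiles = []
    for band in chunks(rows, tile_h):
        split_rows = (list(chunks(r, tile_w)) for r in band)
        for tile in zip(*split_rows):   # the transposition doing the real work
            tiles.append(list(tile))
    return tiles
```

For a 4x4 image sliced into 2x2 tiles, the tiles come out left to right, top to bottom, each as a list of pixel rows.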


There are definitely fewer excuses available for not making progress in the game...

Sunday 3 June 2018

Buzz Kill


To make things more interesting, I was going to attempt something special for this post, and write using the third person once removed. However, when I spoke to the blog author about it, he said that Stew doesn't like the idea; that at best it would only be "vaguely amusing" (his words apparently), and then wear thin with "tedious rapidity". Oh well, I'll just resort to those twin pillars of factual reporting: Exaggeration and Lies.

Bee in a jam jar


Here's an annoying thing: Programming for joystick control in a Dragon/CoCo game. It's annoying because some of the audio hardware is shared with the joystick hardware, requiring care to get the two working together.

It's possible to demonstrate the problem in Dragon BASIC by enabling audio before reading joysticks:

   10 AUDIOON
   20 PRINT JOYSTK(0);
   30 GOTO20

That makes a loud buzzing sound. The joystick read routine needs to mess with the DAC to figure out the joystick positions, and modifies the sound source select bits in order to select the joystick axis. If audio is enabled, you can hear all this happening. (I should mention that the above code snippet only has an audible effect on Dragons and possibly older CoCos. Newer CoCos appear to be smart enough to disable audio on joystick read.)

So the main thing is to make sure audio is disabled while reading the joystick. It is also important to restore the DAC to its original value and to reselect the DAC before enabling audio again. This is the list of steps:

  • Save the current DAC value
  • Disable audio
  • Read joystick
  • Ensure DAC is selected
  • Restore DAC value
  • Enable audio

Some effort can be saved by reading the right-hand joystick x-axis last, as selecting this axis also selects the DAC. (Assuming you want the DAC selected of course. If you were instead using cartridge audio, then it would be more efficient to read the left-hand x-axis last)

It is helpful to make the joystick read routine as fast as possible, to minimise the time that the audio is off. My routine is relatively fast as I'm not interested in converting a precise joystick position, just left-centre-right on the x-axis and up-centre-down on the y-axis.

Without going into too much detail, this is done with four writes to the DAC at $FF20 and four reads from the comparator bit in $FF00. One pair of writes for the y-axis and the other pair for the x-axis. After each DAC write the comparator bit is shifted into the B register, resulting in a number representing the joystick position that can be further processed when the audio has been turned back on. I use a lookup table to convert the value into an angle, but it could just as easily be a jump table.
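The logic of the coarse conversion can be modelled in a few lines of Python (threshold values and bit ordering here are illustrative, not the game's actual constants; on the real hardware the comparison comes from writing the DAC at $FF20 and reading the comparator bit in $FF00):

```python
# Coarse joystick read model: two DAC thresholds per axis, each
# comparator result shifted into an accumulator, giving a 2-bit code.

LO, HI = 85, 170   # roughly 1/3 and 2/3 of the 0..255 range (illustrative)

def coarse_axis(joy_value):
    code = 0
    for threshold in (LO, HI):
        comparator = 1 if joy_value > threshold else 0   # comparator bit model
        code = (code << 1) | comparator                  # shift into B
    return code
```

The result is 0b00 for one extreme, 0b10 for centre and 0b11 for the other extreme; 0b01 can't occur, since a value above the high threshold is always above the low one too.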

I've had that in place for quite some time and had been happy with it until fairly recently when I recommended the same method to someone else and it didn't work so well: They were getting a buzzing sound.

Umptioned again


In my mind I imagined a capacitor somewhere in the audio circuit that would hold the voltage while the audio was disabled. The assumption was that if the audio was off for a short enough time, then the voltage wouldn't change enough to make a noticeable click when the audio was enabled again. However, after a fair bit of digging, taking measurements and running circuit simulations it turns out that is only part of the story.

The first thing I discovered was that connecting different things into the audio output each gave very different results with my joystick code:

  1. Audio via the RF modulator gave the cleanest sound
  2. A composite monitor gave a buzz that decayed to silence in about a second on the tail end of each sound effect. Strangely, I hadn't noticed it until I started listening for it, and now it seems obvious.
  3. A set of powered speakers gave a continuous buzzing sound

It's starting to look like there's a complicated interplay of factors, but it's surprisingly simple to understand once you learn what is happening. I hinted that capacitors are only part of the story. What was missing is the fact that a capacitor is connected to a circuit via some resistance and this introduces a property called a 'time constant'. This defines how long the circuit takes to reach a steady state after something changes, for example after a new value is written to the DAC.

Use dumb luck. It's super effective.


In the case of the RF modulator, there is a combination of capacitance and resistance giving a time constant such that it takes about 5 milliseconds for the circuit to settle down after the DAC is updated. If the audio is disabled during this time, then you get a click because you're interrupting the current flowing in the circuit and that causes a jump in voltage. The shorter the gap between DAC update and audio off, the louder the click.
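The numbers are easy to sanity-check. For an RC circuit, the settling time to within some fraction of the final value follows from v(t)/V = 1 - exp(-t/RC). The component values below are only examples chosen to give a millisecond-order time constant, not measurements from the actual modulator circuit:

```python
import math

# RC settling time sketch: time to get within (1 - fraction) of the
# final value. Component values are illustrative examples.

def settle_time(r_ohms, c_farads, fraction=0.99):
    # v(t)/V = 1 - exp(-t / RC)  =>  t = -RC * ln(1 - fraction)
    return -r_ohms * c_farads * math.log(1.0 - fraction)

t = settle_time(10e3, 100e-9)   # 10k and 100nF give a 1ms time constant
```

A 1 ms time constant needs roughly 4.6 ms to settle to within 1%, which is the right ballpark for the ~5 ms observed.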

It just so happens that my joystick routine was so far away from the last DAC update that I couldn't detect any disturbance. Dumb luck in other words. When I get round to improving the sound engine, the DAC update will be getting closer and closer to the joystick read, so I may start hearing interference.

The composite monitor has a much larger audio input capacitance. This gives a time constant that means it takes a whole second for the circuit to stabilise. After each sound effect, the circuit is out of balance for approximately 25 game loops, each with a joystick read producing a click slightly quieter than the previous one.

The powered speakers are a bit different. There is no visible input capacitance on these. Instead there is a relatively low resistance to ground through the volume control. What this means is that the voltage on the audio output drops to a low level within microseconds of disabling audio. This results in a click on every joystick read giving a continuous buzzing.

Taking all scenarios into consideration, it would seem that the best defence against joystick buzzing is to leave audio disabled after reading the joystick, and to only enable sound at the start of a sound effect. But even then, clicking may still be audible, particularly for quiet or fading sound effects. If possible, leave a few milliseconds after a DAC update before reading joysticks and that should at least keep things quiet as far as audio via the RF modulator is concerned.

Of course, this all results from investigation on a Dragon. Things may be a little different for the CoCo as the various models have custom ICs which may not be the same as the heap of discrete parts in the Dragon. Your kilometrage may vary.




And now for something completely similar



Previously I discussed my initial attempts at producing music for the SN76489, as used in the CoCo Game Master Cartridge. In particular I speculated about modulating the attenuation registers as a way of superimposing lower frequency tones that are below the sound chip's normal range.

To test that idea I would need some real hardware. I ended up making a small sound card that could be piggy-backed onto another cartridge and plugged into my Dragon. In the process I found out just how bad my eyesight had become and wished that I had the luxury of magnifiers and bright lights that I'm used to using at work. (And that I hadn't stupidly left my reading glasses at work)

Fortunately, I still ended up with a working card without too much trouble. The SN76489 audio output appears to be highly prone to unwanted oscillation, making it hard to see the audio waveform on an oscilloscope, so I added a high frequency bypass to ground on the audio output pin. The bypass consists of 100nF + 10R in series, values suggested on the datasheet for a similar part, the SN76496.

I don't always make my own cartridges, but when I do I attach them to other cartridges with tiny bits of wire

Some quick tests proved that the modulation idea worked in principle, so I extended the music player to modulate two of the SN76489 tone channels with software generated waveforms. These have a programmable pitch offset from the hardware tone and also have variable duty cycle. This results in the following sound:



It's not quite the same kind of sound as the variable duty pulse waves that I was playing with before, largely because the phase relationship between the soft and hard waveforms jumps every video frame while the next sound fragment is set up, but it does add a great edge to the sound. Definitely a keeper :)

Friday 6 April 2018

Encore

Ooh shiny

Well if there was any kind of schedule, I'm definitely behind it. Interesting things keep appearing and distracting me. Like that thing over there. Back in a minute...

The latest shiny is that development snapshots of xroar now have Game Master Cartridge support. This is exciting stuff: I've been thinking about the GMC for some time as a vehicle for a 'deluxe' version, and this was just the excuse I needed to have a play. The banked ROM would allow for a greater variety of sprites and a larger background map, plus the SN76489 sound chip could give more options for in-game sound. More on that in a bit.

Lost in music


I felt I was on a roll with the music, and it seems to be a hot topic at the moment, so I stuck with it and have now recorded my changes to CyD as a fork on GitHub.

I'd previously written about how I tweaked CyD to play variable duty cycle square waves and added simple support for drum samples. Since then I've improved the duty cycle control a little and added a couple more example tunes.

I originally had a bit of code like this to control the pulse duty over time:

        lda duty
rate    equ *+1
        adda #0
cond    bmi skip
        sta duty
skip

The bmi instruction prevents the duty cycle being negative, so we could start with a low value and a positive rate to give a waveform that changes from narrow pulses to square, or go the other way by starting with a large positive value and have a negative rate.

As an added bonus, the bmi instruction can be modified into a brn to make the duty keep wrapping round and cycle endlessly. That sounds really good, but I wondered how much better it would be if I could have the duty ping-pong between two arbitrary limits.

The difficulty with that is there are then two conditions to check for, and which one we check depends on which direction the rate is heading. For quite a while I didn't bother because it sounded like too much complexity for the gain. I didn't want to add too much code because the time taken reduces the quality of the sound.

Then I thought of a way. Not a pretty way, but a way nonetheless. The following only adds one immediate instruction to the critical path. The other instructions only come into play briefly when one of the limits is reached:

        lda   duty
rate    equ   *+1
        adda  #0
cond1   equ   *+1
        cmpa  #$e0
        bls   store
        neg   rate
cond2   equ   *+1
        ldd   #$2024  ; bhs=$24 bls=$23
        ldx   cond1
        stx   cond2
        std   cond1
        bra   skip
store   sta   duty
skip

This exploits the fact that the limit value and the branch opcode are sitting right next to each other in memory and can both be written at the same time with a 16-bit store. Each time the limit is hit, the limit value and branch condition are updated and the sign of the rate is reversed. Ugly, but worth it: This gives much more variety to the sound.
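Stripped of the self-modification, the behaviour the code implements looks like this in Python (the limits mirror the $20/$E0 values in the listing):

```python
# Model of the ping-pong duty: ramp between two limits, and when a step
# would overshoot, skip the store and negate the rate. This matches the
# bls/bhs swap in the self-modifying version above.

LOW, HIGH = 0x20, 0xE0

def step_duty(duty, rate):
    candidate = duty + rate
    if rate >= 0 and candidate <= HIGH:   # 'bls store' against the high limit
        return candidate, rate
    if rate < 0 and candidate >= LOW:     # 'bhs store' against the low limit
        return candidate, rate
    return duty, -rate                    # limit hit: keep old duty, flip rate
```

Run repeatedly, the duty bounces endlessly between the two limits.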


Bend it like Beckham


To help with manually entered music I've added a macro to set up portamento effects. This is where one note transitions into another by smoothly sliding the frequency between two values and is a familiar element in many chiptunes. The way CyD does it is by adding a constant value to the frequency value each 'tick', or 1/50th of a second. So to slide from one note to another we need to calculate how much to add per tick by dividing the desired frequency change by the number of ticks over which it should happen. This could be done by the assembler if the frequency values are defined in the source code.
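The arithmetic behind the macro is straightforward; a sketch in Python (the frequency values below are illustrative, and the real calculation is done at assembly time):

```python
# Portamento step sketch: per-tick increment is the total frequency
# change divided by the slide length in ticks (1 tick = 1/50 s).

TICKS_PER_SECOND = 50

def portamento_step(start_freq, end_freq, ticks):
    return (end_freq - start_freq) // ticks   # integer step, as at build time

def slide(start_freq, end_freq, ticks):
    step = portamento_step(start_freq, end_freq, ticks)
    freq, out = start_freq, []
    for _ in range(ticks):
        freq += step
        out.append(freq)
    return out
```

When the change divides evenly the slide lands exactly on the target; otherwise the integer step undershoots slightly, which is usually inaudible over a half-second bend.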

Among the CyD source files there was a nice Perl script that calculated the note frequency lookup table. I say 'was' because I've butchered this script most 'orridly (Perl just doesn't understand me) to enable the assembly source to calculate its own timings based on how the channels have been configured.

This script now also outputs the note frequency lookup values as symbols which in turn allowed me to add the helper macro to set up a portamento based on the start and end notes and the time interval. This makes it really easy to add guitar-like bends.

After much experimentation I've discovered that a snare drum sample can be lengthened and enhanced by switching to white noise near the end of the sample. The white noise then just continues until cut off by another sound. This sounds a bit odd on its own, but when mixed with other channels, gives the impression of a longer sample with reverb. I think it sounds pretty effective anyway:



All your bass are belong to us


As I've invested a lot of time in CyD and am familiar with the way it works, I hacked together a GMC version to get a feel for how things might sound. What quickly became apparent is that the lowest notes are not particularly low. It goes down to about a B2, which is nowhere near the lowest note on a guitar, let alone a bass.

One way to get lower notes is to use a lower clock frequency than the standard 4MHz. In fact Time Pilot '84, the arcade game that my game is at least trying to be loosely based on, runs three similar chips at half the NTSC subcarrier frequency, or approximately 1.79MHz. That would get you a mains-hum-like A1, the second lowest string on a standard bass, and a bit closer to what we like.
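Both figures fall out of the SN76489's tone formula: the output frequency is clock / (32 * N), with a 10-bit divider, so N tops out at 1023:

```python
# Lowest available tone on the SN76489: clock / (32 * N) with N <= 1023.

def lowest_tone(clock_hz):
    return clock_hz / (32 * 1023)

f_4mhz = lowest_tone(4_000_000)        # ~122 Hz, just below B2 (~123.5 Hz)
f_tp84 = lowest_tone(3_579_545 / 2)    # ~54.7 Hz, close to A1 (55 Hz)
```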

Another way is to make use of the 'periodic noise' capability. This is a function of the noise channel, but isn't really noise, just a low duty square wave with a frequency that is (I think) 1/15 of the usual frequency. It has a thin, buzzy kind of sound.

Yet another route to lower notes is to modulate the attenuation registers to superimpose lower frequencies onto the normal output. This requires CPU time so wouldn't be very suitable for in-game music, but might sound interesting enough to be good for title music, especially if the duty cycle varies over time. I'm very tempted to give it a go.


Play it again, GMC


I reworked my example tune above to play more nicely on the SN76489. I had to transpose it up 4 semitones and throw away the really low notes, which makes it sound completely different. I also ended up mashing in some other ideas just to hear what it would sound like and this is the unholy result:



The echo effect at the start is easily achieved by playing the same thing on two channels with a moderate delay between the two. In this case the delay is something like 300ms. When the notes bend it really comes alive. I've also snuck in some cheeky vibrato as well to spice things up a bit. This is just a rapid series of bends above and below the start note to make the frequency wobble around in a pleasing way.

The drum sounds are simply bursts of noise with the volume and frequency changing over time. The kick drum has a really rapid drop in frequency to make it sound like a thud while the snare has a short period of rapid frequency drop followed by a longer period with more gentle change. It works quite well but it leaves me wondering how good a result could be achieved with a more sophisticated shape to the frequency envelope.

I've put my GMC version of CyD on github, just in case anyone wants to have a play...

Saturday 27 January 2018

Music


I decided to take a long break from coding the game mechanics and instead have a closer look at generating some music, or "mooozik", as they say round these parts.

Because my living involves a certain amount of programming I don't need much in the way of persuasion to run as far away from coding as I can. On the other hand, playing around with music is much more appealing to me and I've been having a lot of fun over the last few weeks getting the Dragon to produce some audio that is - dare I say it? - musical.

The fruit of someone else's labour


I would like to have as much music in the game as possible: Theme music on the title screen, story music, get ready music, game over music, high score music etc. Whatever I can fit in to fill the silence. Maybe even in-game music if a sound chip is present. I'm going to aim high and see how far I get.

The biggest problem seemed to be the lack of a suitable music engine. When I started work on the game I wasn't aware of anything off the shelf that I could use and it looked like I was going to have to write something from scratch.

I learned quite a bit about music playback while working on a musical tape loader with Simon Jonassen and started to get a feel for the kind of things that might be possible. Then, as luck would have it, Ciaran Anscomb brought his marvellous CyD project to my attention.

To fill in a bit of background this is a software synth based on the CoCoSID project by RĂ©mi Veilleux, which as the title suggests, is designed to play C64 SID tunes on the CoCo. It works by mixing three channels of synthesised waveforms and updating the synthesis parameters 50 times per second. The waveforms are synthesised from a selection of reference waveforms stored in memory. The playback parameters are determined by an offline conversion of a SID register dump obtained from a C64 music player into a pattern-based format that CoCoSID can read in real time. The results are really impressive.

What Ciaran has done with CyD is to produce a rewritten, optimised version with a new music processor allowing you to enter music data in a more memory-efficient programmatic form. It has a variety of features such as envelopes, portamento, subroutine calls, transposition and arpeggios - i.e. he's made it more like a music driver that back-in-the-day purveyors of fine tunery such as Martin Galway or Rob Hubbard might have used. "That looks handy," I remember thinking to myself, "I'm having that."

Some of you may have heard the title music from the 32K demo version of the game. This is being played using an unmodified version of CyD. It has a very distinctive and dynamic sound, far removed from the sounds that most of us Dragon owners are used to hearing. However, after the novelty wore off, I wondered if I could bend things a bit to add even more complexity to the sound.

Pushing the envelope


Generating a tone in software will very often be based around a phase accumulator implemented with something like the following piece of code:

phase equ  *+1
      ldd  #0
freq  equ  *+1
      addd #0
      std  <phase

That generates a sawtooth waveform in the A register. Self-modifying code is used for performance, and direct page addressing helps too. Synthesising an arbitrary waveform is just an indexed instruction away:

      lda a,y  ; y points to a 256 byte waveform

The same lookup could be done by self-modifying the lower address byte of an extended address instruction with the upper byte pointing to the current waveform. In fact CyD does a bit of both to mix three channels together in an optimal way.
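For reference, the phase accumulator can be sketched in Python, with the high byte of a 16-bit accumulator standing in for the A register:

```python
# Phase accumulator sawtooth model: each update adds the frequency word
# to a 16-bit phase (wrapping, like addd) and the high byte is the sample.

def sawtooth_samples(freq_word, n):
    phase, out = 0, []
    for _ in range(n):
        phase = (phase + freq_word) & 0xFFFF
        out.append(phase >> 8)   # the A register: sawtooth value
    return out
```

A frequency word of $8000 flips the top bit every update, while small words ramp slowly through the 0-255 sawtooth.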

Envelopes are simulated by using separate versions of the same waveform differing only in volume level. Having just two or three volume levels works surprisingly well.

So what can we do that's different? Funnily enough, Ciaran came up with something new for his Dunjunz game, this time generating square waves directly. Instead of looking up a reference waveform stored in memory it converts the sawtooth in the A register into a square wave using something like this:

    lsla       ; put msb of A into B
    rorb       ;
    sex        ; extend sign of B into A
    anda #vol  ; set volume

If A is greater than or equal to 128, it becomes set to #vol, else it gets set to zero. Nice. It's the sort of branchless conditional construct that you might see a compiler generate to avoid stalling a pipelined cpu and here it is making itself useful on the 6809.

That got me thinking about how cool it would be to be able to vary the duty cycle of the square wave i.e. generate rectangular pulses of varying width. That would make for some very interesting noises. It turns out that duty can be added for free by replacing the lsla instruction with an adda:

    adda #duty ; use duty to generate carry
    rorb       ; ...and set sign of B
    sex        ; extend sign of B into A
    anda #vol  ; set volume

Varying the duty value from 0 to 255 will generate rectangular waveforms with corresponding duty ratio of 0 to nearly 100%. A duty value of 128 will generate a square wave as before.
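The trick is that adda sets the carry exactly when the sawtooth byte is in the top part of its range, so the adda/rorb/sex/anda sequence boils down to this:

```python
# Model of the duty trick: output is vol when adding the duty value to
# the sawtooth byte carries out of 8 bits (adda sets C), else zero
# (rorb puts C in B's sign bit, sex spreads it through A, anda masks).

def pulse(saw_byte, duty, vol):
    carry = (saw_byte + duty) > 0xFF
    return vol if carry else 0
```

With duty = 128 the carry fires whenever the sawtooth is 128 or more, reproducing the original square wave.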

To try it out I added a command to CyD to set the duty. To demonstrate what it sounds like, first here's a simple bass riff played using a square wave:



And now the same notes but with different duty cycles:



Things get more interesting when the duty is varied while a note is playing so I added a tiny bit of code to do just that:



Let's go mad and play two voices an octave apart with different duty settings:



Now you're talking. If you enjoy upsetting C64 fans, try calling this sound 'SID sound'. That riles them up something chronic.

Kicks like a concrete donkey


CyD gives us three channels to play with, which immediately makes me think of great power trios such as Clapton/Bruce/Baker, Lifeson/Lee/Peart and, err, Rod/Jane/Freddy. I'm thinking one channel for lead, another for bass and another for drums.

Drums can be synthesised from combinations of noise, square, triangle waveforms etc., and messing with the frequency. In fact I did this for the title music in the 32K demo, but the kick drum, while adding cool accents to the bass, did sound a bit feeble to me.

A kick drum is pretty much a noisy click followed immediately by a thud. The frequency is fairly low and drops over time, with the volume swelling in the middle. The whole thing might last 50 - 300ms. Bearing this in mind I drew a 256 sample waveform freehand in Milky Tracker:

Crude, but effective

I boosted the volume somewhat, resulting in a bit of clipping, which is helpful here and makes the kick louder. This was then exported to an 8 bit wav file and converted using sox into a 256 byte raw binary file that could be included into the assembly source. To save you the pain of trying to decipher the sox manual, this is the command I used:

    sox -D in.wav out.raw vol 0.33 dcshift -0.66

The -D option disables dithering. When dithering is enabled, it can make the waveform one bit too loud, potentially resulting in overflows and nasty clicking during playback.

There are still some hoops to jump through to get CyD to play the sample properly. First we need to reset the channel phase counter to the beginning of the waveform when we trigger the sample, otherwise it will play from some random point. This required a new sample playback command. We next need to make an envelope that is about as long as the sample, in this case 4/50ths of a second, and ending with a silent waveform to stop playback. Then the frequency needs to be fine tuned so that the sample ends when the envelope ends. Otherwise the sample will either be cut short, or it will start again from the beginning. I manually adjusted the first note in the frequency table to suit the sample, though I think a frequency set command will be required to allow a variety of samples to be played.

This sounds like the sort of fiddly exercise that should be automated in software, so that's yet another thing to add to the ever-growing todo list. Anyway, this is what a 256 byte kick drum sample sounds like on a Dragon:



Music puns are not my forte


To actually compose music I play around with ideas in Milky Tracker and then hand convert to CyD. (Yeah, yeah, I know I should write a conversion script.) I've also been playing with MikroWave on my phone. It's a bit of a toy but you can throw together new loops and rhythms really quickly.

Time for a quick demo tune. Now it's fair to warn you that I quite like droning repetitive music. This allows me to listen to Moby without having to gnaw off my own arm to escape, though the urge is never far away. Like all the other recordings on this page this was captured from xroar. Roll credits...