Welcome back for another development update. At the start of the week, I shipped all of the CRUD updates that I talked about last week, and when I finished, I wasn’t sure precisely what interested me the most. I called it a day on Monday and hoped to have some inspiration before the morning.
On Tuesday, I still wasn’t particularly in the mood to code, and I wanted to do something different. I started thinking about composing music again, and I thought, “I wonder if my keyboard will fit next to my desk in the office.” Ten minutes later, my wife yelled, “Careful!” as she spotted me awkwardly carrying the full-size 88-key keyboard down the stairs. I plugged it all in, and within 20 minutes I had figured out how to get it talking to a few pieces of software I was trying on Linux.
Within an hour, though, I began to grow frustrated again, trying to work in the world of DAWs. I know they are incredible pieces of software, but my brain always thinks about things more programmatically. I thought back to how I had been daydreaming about ncog having a tracker system that people could use to write music. I decided I wanted to start playing around with those technologies, but I also had the thought that it might be fun to build some sort of live-performance tool for my Twitch stream.
I was impressed when I found midir and quickly got its example talking to my Casio Privia PX-150. I started modifying the example code, trying to hook up a set of grand piano samples I found online for free. My plan was to use rodio to load and play the correct sample for each pitch and pressure.
Unfortunately, after spending entirely too much time attempting to reduce the latency I was hearing when pressing the keys, I opened up the audio files and inspected them. I realized I was doomed; the samples had varying amounts of dead air at the start of the clips. I didn’t want to try to align the offsets for 264 audio samples, so I decided I would switch to the built-in SineWave Source.
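The heart of that kind of sine-based playback is pleasantly small. Here’s a sketch (not Muse’s actual code) that converts a MIDI note number to a frequency using the standard equal-temperament formula and renders a buffer of samples, which is roughly what rodio’s `SineWave` source does internally:

```rust
// Illustrative sketch of sine synthesis from a MIDI note number.
// Function names are my own; this is not Muse's implementation.

/// Equal-temperament conversion: MIDI note 69 (A4) = 440 Hz.
fn midi_note_to_freq(note: u8) -> f32 {
    440.0 * 2f32.powf((note as f32 - 69.0) / 12.0)
}

/// Render `len` samples of a sine wave at `freq` Hz for a given sample rate.
fn sine_samples(freq: f32, sample_rate: u32, len: usize) -> Vec<f32> {
    (0..len)
        .map(|i| {
            let t = i as f32 / sample_rate as f32;
            (2.0 * std::f32::consts::PI * freq * t).sin()
        })
        .collect()
}

fn main() {
    let freq = midi_note_to_freq(60); // middle C, ~261.63 Hz
    let samples = sine_samples(freq, 44_100, 1024);
    println!("note 60 -> {:.2} Hz, rendered {} samples", freq, samples.len());
}
```

Because the wave is generated on demand, there is no leading dead air to trim, which is exactly why it sidestepped the sample-alignment problem.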
Within a few minutes, I was smiling as I felt the pure joy of knowing that the audio I was hearing was generated by code I wrote and was a direct result of me playing a physical instrument. It may not seem like much to you, but it brought back a sense of nostalgia and reawakened both my love of code and my love of music. I played the piano a lot that day, and I wrote a lot of code.
Muse was born this week. There are two projects that I’ve put in that repository: muse and amuse.
Muse’s focus is on being a digital audio synthesis library. My vision for this project is divided into three parts:
Digital Synthesizer API
First and foremost, I’m assembling the building blocks for digital synthesis. Currently, the repository has implementations of the four primary oscillators and of an Envelope that supports Attack, Hold, Decay, Sustain, and Release, all with customizable bezier curves.
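The four primary oscillator shapes can each be written as a pure function of phase. A hedged sketch of what they might look like (the names and the 0..1 phase normalization are my assumptions, not Muse’s API):

```rust
use std::f32::consts::PI;

// Each oscillator maps a phase in [0.0, 1.0) to a sample in [-1.0, 1.0].

fn sine(phase: f32) -> f32 {
    (2.0 * PI * phase).sin()
}

fn square(phase: f32) -> f32 {
    if phase < 0.5 { 1.0 } else { -1.0 }
}

fn sawtooth(phase: f32) -> f32 {
    2.0 * phase - 1.0
}

fn triangle(phase: f32) -> f32 {
    // Rises from -1 to 1 over the first half of the cycle, falls back
    // to -1 over the second half.
    if phase < 0.5 {
        4.0 * phase - 1.0
    } else {
        3.0 - 4.0 * phase
    }
}

fn main() {
    for (name, f) in [
        ("sine", sine as fn(f32) -> f32),
        ("square", square),
        ("sawtooth", sawtooth),
        ("triangle", triangle),
    ] {
        println!("{name}(0.25) = {}", f(0.25));
    }
}
```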
There’s a lot of work left on this layer:
- Envelopes: I’m new to this, and I didn’t fully grasp how powerful Envelopes truly were. Most synthesizer software allows you to bind an envelope to any attribute of the sound, not just the amplitude/volume. I need to refactor how envelopes work so that they can interact with arbitrary controls within Muse.
- Filters: I want to implement a lot of common filters, including low/high pass, distortion, chorus, unison, and more. However, I also want to make sure it’s straightforward for users of Muse to implement custom filters, and to have those filters be controllable by the Envelopes and LFOs.
- Low-Frequency Oscillators: Like the standard oscillators, but used to drive automation within the synthesizer without needing to write a script. This automation will likely use the same mechanism that lets Envelopes control other Muse components.
- Sampling: I’m anticipating that there are going to be special approaches to re-pitching samples when creating a synthesized instrument from pre-recorded samples. Sampling is low on my priority list, as I’m personally most interested in synthetic audio. That being said, if you don’t need to re-pitch a sample, you can easily use Muse for a drum-kit style playback system.
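One way to let an envelope (or LFO) drive arbitrary parameters, as described above, is to put every modulation target behind a common trait. This is only a sketch of the idea with invented names, not Muse’s design:

```rust
/// A drastically simplified envelope: just a linear attack and release,
/// evaluated against seconds since note-on / note-off.
struct Envelope {
    attack: f32,  // seconds to rise to 1.0
    release: f32, // seconds to fall back to 0.0
}

impl Envelope {
    /// Value in [0.0, 1.0] while the note is held.
    fn held_value(&self, elapsed: f32) -> f32 {
        (elapsed / self.attack).min(1.0)
    }

    /// Value in [0.0, 1.0] after the note is released.
    fn released_value(&self, elapsed: f32) -> f32 {
        (1.0 - elapsed / self.release).max(0.0)
    }
}

/// Anything an envelope can drive: amplitude, filter cutoff, pitch, etc.
trait Modulated {
    fn set_level(&mut self, level: f32);
}

struct Amplitude(f32);

impl Modulated for Amplitude {
    fn set_level(&mut self, level: f32) {
        self.0 = level;
    }
}

struct FilterCutoff {
    hz: f32,
    min_hz: f32,
    max_hz: f32,
}

impl Modulated for FilterCutoff {
    fn set_level(&mut self, level: f32) {
        // Map the envelope's 0..1 output onto the cutoff's range.
        self.hz = self.min_hz + level * (self.max_hz - self.min_hz);
    }
}

fn main() {
    let env = Envelope { attack: 0.1, release: 0.5 };
    let mut cutoff = FilterCutoff { hz: 0.0, min_hz: 200.0, max_hz: 2_000.0 };
    cutoff.set_level(env.held_value(0.05)); // halfway through the attack
    println!("cutoff is now {} Hz", cutoff.hz);
}
```

The appeal of a trait like this is that the envelope code never needs to know what it is modulating; amplitude stops being a special case.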
Virtual Instrument API
Generating sounds is only half of the battle; the other half is playback. I’m not sure of the final design yet, but right now, I have a `VirtualInstrument` API that accepts a `ToneGenerator` trait implementor. `ToneGenerator` defines an associated type, a `rodio::Source` implementor that your generator returns, and a single method, `generate_tone(note: Note) -> GeneratedTone<Self::Source>`.
Once instantiated, `VirtualInstrument` provides a friendly interface to play a note, stop a note, and toggle the sustain pedal. A `ToneGenerator` doesn’t necessarily need to return the same sound every time, so it would be entirely possible to create a `VirtualInstrument` with a `ToneGenerator` that behaved like a drum kit.
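To make the shape of that trait concrete, here is a dependency-free sketch. The real associated type is bounded by `rodio::Source`; here a plain sample iterator stands in for it, and the `Note` and `GeneratedTone` definitions are simplified stand-ins rather than Muse’s actual types:

```rust
/// Simplified stand-in for Muse's Note type.
struct Note {
    midi: u8,
}

/// Simplified stand-in for Muse's GeneratedTone wrapper.
struct GeneratedTone<S> {
    source: S,
}

/// Sketch of the ToneGenerator trait, with Iterator<Item = f32>
/// standing in for the real rodio::Source bound.
trait ToneGenerator {
    type Source: Iterator<Item = f32>;
    fn generate_tone(&self, note: Note) -> GeneratedTone<Self::Source>;
}

/// One possible implementor. A generator doesn't have to be pitched at
/// all; another implementor could key drum samples off the note number.
struct SineGenerator {
    sample_rate: u32,
}

impl ToneGenerator for SineGenerator {
    type Source = std::vec::IntoIter<f32>;

    fn generate_tone(&self, note: Note) -> GeneratedTone<Self::Source> {
        let freq = 440.0 * 2f32.powf((note.midi as f32 - 69.0) / 12.0);
        let samples: Vec<f32> = (0..self.sample_rate)
            .map(|i| {
                let t = i as f32 / self.sample_rate as f32;
                (2.0 * std::f32::consts::PI * freq * t).sin()
            })
            .collect();
        GeneratedTone { source: samples.into_iter() }
    }
}

fn main() {
    let generator = SineGenerator { sample_rate: 44_100 };
    let tone = generator.generate_tone(Note { midi: 69 }); // A4
    println!("one second of audio: {} samples", tone.source.len());
}
```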
The final goal of Muse as of today will be to provide primitives that can be saved to and restored from disk. I’m not sure if there are existing file formats that make sense to try to interoperate with, but I will be doing some research. Regardless, I’m hoping to design a straightforward human-readable and human-writable format that can instantiate a full synthesizer setup with little Rust code necessary.
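As a thought experiment for what “human-readable and human-writable” could mean, here is a tiny hand-rolled parser for a hypothetical one-entry-per-line `key: value` patch file. The format, the `Patch` struct, and every field name are invented for illustration; the real format still needs research:

```rust
/// A hypothetical saved-synth description. None of these field names
/// come from Muse; they are invented for illustration.
#[derive(Debug, PartialEq)]
struct Patch {
    oscillator: String,
    attack: f32,
    release: f32,
}

/// Parse a minimal "key: value" format, one entry per line.
/// Returns None on unknown keys, malformed lines, or missing fields.
fn parse_patch(text: &str) -> Option<Patch> {
    let mut oscillator = None;
    let mut attack = None;
    let mut release = None;
    for line in text.lines() {
        let line = line.trim();
        if line.is_empty() {
            continue;
        }
        let (key, value) = line.split_once(':')?;
        match key.trim() {
            "oscillator" => oscillator = Some(value.trim().to_string()),
            "attack" => attack = value.trim().parse().ok(),
            "release" => release = value.trim().parse().ok(),
            _ => return None, // unknown key
        }
    }
    Some(Patch {
        oscillator: oscillator?,
        attack: attack?,
        release: release?,
    })
}

fn main() {
    let text = "oscillator: sine\nattack: 0.01\nrelease: 0.4\n";
    println!("{:?}", parse_patch(text));
}
```

In practice, an existing self-describing format (and serde) would probably beat a hand-rolled parser, but the point is how little text should be needed to describe a full setup.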
The first consumer of `muse` is `amuse`. My goals for `amuse` are a little nebulous right now. Currently, my vision includes creating a synthesizer UI similar to how many VST UIs work, as well as creating a virtual piano interface so you can play around even if you have no connected MIDI controllers.
Right now, `amuse` has all of the MIDI control directly in `main.rs`, which is not how things should be. I’m thinking of creating an async wrapper around `midir`, using channels to communicate between the MIDI threads and the app’s async code. Because my initial goal is just to read MIDI signals, I’m not sure whether this deserves to be a separate crate or merely a separate module inside the project. If anyone is interested in a fully async MIDI consumption API, I’m happy to spin it into its own crate.
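The channel idea can be sketched without midir itself: midir invokes a callback on its own thread for each incoming message, so that callback only needs to parse the raw bytes and send an event through a channel that the async side (or any other thread) can drain. The `MidiEvent` type and the parsing below are my own simplification, covering only note-on/note-off and ignoring the channel nibble:

```rust
use std::sync::mpsc;
use std::thread;

/// Simplified event type; a real wrapper would cover far more of MIDI.
#[derive(Debug, PartialEq)]
enum MidiEvent {
    NoteOn { note: u8, velocity: u8 },
    NoteOff { note: u8 },
}

/// Parse a raw channel-voice message. Status 0x9n is note-on, 0x8n is
/// note-off; a note-on with velocity 0 is conventionally note-off.
fn parse_midi(bytes: &[u8]) -> Option<MidiEvent> {
    match bytes {
        [status, note, velocity] if status & 0xF0 == 0x90 && *velocity > 0 => {
            Some(MidiEvent::NoteOn { note: *note, velocity: *velocity })
        }
        [status, note, 0] if status & 0xF0 == 0x90 => {
            Some(MidiEvent::NoteOff { note: *note })
        }
        [status, note, _] if status & 0xF0 == 0x80 => {
            Some(MidiEvent::NoteOff { note: *note })
        }
        _ => None,
    }
}

fn main() {
    let (tx, rx) = mpsc::channel();

    // Stand-in for midir's callback thread: in real code, the closure
    // passed to MidiInput::connect would do this parse-and-send.
    let producer = thread::spawn(move || {
        for message in [[0x90, 60, 100], [0x80, 60, 0]] {
            if let Some(event) = parse_midi(&message) {
                tx.send(event).unwrap();
            }
        }
    });

    // The app side just drains the channel at its own pace.
    for event in rx {
        println!("{:?}", event);
    }
    producer.join().unwrap();
}
```

Swapping `std::sync::mpsc` for an async-aware channel is what would make the receiving end awaitable from the app’s async code.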
`amuse` is meant to be a demo of `muse`, but it may also provide some building blocks for the editor that will eventually be part of `ncog`.
What’s coming next?
I’m having a lot of fun with `amuse`. So, my goals for the following week are to make more progress on `muse` and maybe begin bringing `kludgine` into `amuse` to start providing a user interface.
I’ve also started daydreaming about allowing `amuse` to talk to `ncog` directly for cloud saves and collaboration. To me, collaboration on a synthesizer is exciting, and it’s just one step toward creating a collaborative music composition tool for `ncog`. I like the idea of `amuse` having standalone functionality. Still, if you choose to save to `ncog`, you can also let people watch as you play around, or enable others to help you edit your synth configurations.
I don’t know what project I’m going to tackle on ncog this week, but my secondary goal is to make sure I move something along each week on the core ncog project.
I hope everyone has a safe weekend and a great next week. Please be responsible when out in public and wear a mask or face covering. It’s not about protecting you; it’s about protecting others from you if you’re asymptomatically infected.