The AMY Additive Piano Voice

The piano is a versatile and mature instrument. One reason for its popularity is its wide timbral range; in fact, its main innovation was the ability to play notes both loud and soft, hence "Pianoforte" (its full original name).

To make AMY into a truly general-purpose music synthesizer, we wanted to add a good piano voice, but a good piano voice is not simple to synthesize. This page explains some of what makes piano synthesis challenging, and how we addressed it. Our approach uses additive synthesis, which nicely rounds out the demonstration of the primary synthesis techniques implemented in AMY, after subtractive (the Juno-6 patches) and FM (the DX7 patches).

The sound of a piano

Here's an example of 5 notes played on a real piano:

[audio/spectrogram: five piano notes assembled from the UIowa samples]

This clip starts with a D4 note played at three loudnesses - pp, mf, and ff. These are followed by a D2 (two octaves lower), and a D6 (two octaves higher). (I made this example by adding together isolated recordings of single piano notes from the University of Iowa Electronic Music Studios Instrument Samples, which are the basis of the AMY piano. I combined them with the code in make_piano_examples.ipynb.)

Some things to notice:

- Each note settles into a stable set of harmonics, but the individual harmonic decays are complex and not simply monotonic.
- The louder strikes are not just scaled-up copies of the quiet one; the timbre gets brighter, with relatively more energy in the upper harmonics.
- The low D2 rings on far longer than the high D6; overall decay time varies strongly with pitch.

Synthesizing piano sounds

Electronic musical instruments have always taken inspiration from their acoustic forebears, and most electronic keyboards will attempt to simulate a piano.

Let's set up a function in AMY (which runs in Python right in this web page, so you can edit the code!) that plays the recorded pattern above on some AMY presets:

```python
def piano_example(base_note=72, volume=5, send_command=amy.send, init_command=lambda: None):
    amy.send(time=0, volume=volume)
    init_command()
    send_command(time=50, voices='0', note=base_note, vel=0.05)
    send_command(time=435, voices='0', note=base_note, vel=0)
    send_command(time=450, voices='0', note=base_note, vel=0.63)
    send_command(time=835, voices='0', note=base_note, vel=0)
    send_command(time=850, voices='0', note=base_note, vel=1.0)
    send_command(time=1485, voices='0', note=base_note, vel=0)
    send_command(time=1500, voices='1', note=base_note - 24, vel=0.6)
    send_command(time=2100, voices='2', note=base_note + 24, vel=1.0)
    send_command(time=3000, voices='1', note=base_note - 24, vel=0)
    send_command(time=3000, voices='2', note=base_note + 24, vel=0)
```

The Roland Juno-60 included a preset called Piano, which we can now hear with

```python
piano_example(base_note=74, volume=10,
              init_command=lambda: amy.send(time=0, voices='0,1,2', load_patch=7))
```

(Try changing base_note, volume, or the patch number and running again.)

This synthetic piano gets the stable harmonic structure and steady decay of each note, but there's no change in timbre with the different note velocities; every harmonic gets louder by the same factor. (In fact, the Juno-60 was not velocity sensitive, but it is usual practice to scale the whole note in proportion to velocity.) There's no complexity to the harmonic decays; they are uniformly monotonic. And the overall note decay time doesn't vary with the pitch.

The DX7 similarly provides a number of presets claiming to be pianos, including 135-137 (in our numbering which starts at 128):

```python
piano_example(base_note=50, volume=25,
              init_command=lambda: amy.send(time=0, voices='0,1,2', load_patch=137))
```

This sounds more like a DX7 than any acoustic instrument. It does manage to bring some modulation on top of the decays of the harmonics (visible as gaps in the horizontal lines) but is not very convincing.

Additive synthesis

The horizontal lines in the spectrogram are simply sinusoids at fixed frequencies with slowly-varying amplitudes; the essence of Fourier analysis is that any periodic signal can be built up by summing sinusoids at integer multiples of the fundamental frequency (the lowest sinusoid). We can use this directly to synthesize musical sounds, so-called "additive synthesis", and AMY was originally designed for this very purpose. We use one oscillator for each harmonic, and set up its amplitude envelope to be a copy of the corresponding harmonic in a real piano signal. (I wrote code to analyze the UIowa piano sounds into harmonic envelopes in piano_heterodyne.ipynb.)
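To make the idea concrete, here's a minimal plain-numpy sketch of additive synthesis. This is not AMY code, and the envelopes are made-up exponential decays; it just shows the structure of one sinusoid per harmonic, each with its own amplitude envelope:

```python
import numpy as np

def additive_note(f0=220.0, num_harmonics=8, dur=1.0, sr=8000):
    """Sum sinusoids at integer multiples of f0, each scaled by its own envelope."""
    t = np.arange(int(dur * sr)) / sr
    signal = np.zeros_like(t)
    for h in range(1, num_harmonics + 1):
        # Made-up envelope: higher harmonics start quieter and decay faster.
        envelope = (1.0 / h) * np.exp(-3.0 * h * t / dur)
        signal += envelope * np.sin(2 * np.pi * h * f0 * t)
    return signal

note = additive_note()
```

In a real additive instrument like this piano voice, those synthetic envelopes are replaced by per-harmonic envelopes measured from recordings.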

Let's start by loading in the analysis. We're using ulab, a numpy-like library written for MicroPython (which is what this web page uses).

"""Piano notes generated on amy/tulip.""" # Uses the partials amplitude breakpoints and residual written by piano_heterodyne.ipynb. from ulab import numpy as np # Read in the params file written by piano_heterodyne.ipynb # Contents: # sample_times_ms - single vector of fixed log-spaced envelope sample times (in int16 integer ms). # notes - the MIDI numbers corresponding to each note described. # velocities - The (MIDI) strike velocities available for each note, the same for all notes. # num_harmonics - Array of (num_notes * num_velocities) counts of how many harmonics are defined for each note+vel combination. # harmonics_freq - Vector of (total_num_harmonics) int16s giving freq for each harmonic in "MIDI cents" i.e. 6900 = 440 Hz. # harmonics_mags - Array of (total_num_harmonics, num_sample_times) uint8s giving envelope samples for each harmonic. In dB, between 0 and 100. from piano_params import notes_params NOTES = np.array(notes_params['notes'], dtype=np.int8) VELOCITIES = np.array(notes_params['velocities'], dtype=np.int8) NUM_HARMONICS = np.array(notes_params['num_harmonics'], dtype=np.int16) assert len(NUM_HARMONICS) == len(NOTES) * len(VELOCITIES) NUM_MAGS = len(notes_params['harmonics_mags'][0]) # Add in a derived diff-times and start-harmonic fields # Reintroduce the initial zero-time... SAMPLE_TIMES = np.array([0] + notes_params['sample_times_ms']) #.. so we can neatly calculate the time-deltas needed for BP strings. DIFF_TIMES = SAMPLE_TIMES[1:] - SAMPLE_TIMES[:-1] # Lookup to find first harmonic for nth note. START_HARMONIC = np.zeros(len(NUM_HARMONICS), dtype=np.int16) for i in range(len(NUM_HARMONICS)): # No cumsum in ulab.numpy START_HARMONIC[i] = np.sum(NUM_HARMONICS[:i]) # We build a single array for all the harmonics with the frequency as the # first column, followed by the envelope magnitudes. 
Then, we can pull # out the entire description for a given note/velocity pair simply by # pulling out NUM_HARMONICS[harmonic_index] rows starting at # START_HARMONIC[harmonic_index] FREQ_MAGS = np.zeros((np.sum(NUM_HARMONICS), 1 + NUM_MAGS), dtype=np.int16) FREQ_MAGS[:, 0] = np.array(notes_params['harmonics_freq'], dtype=np.int16) FREQ_MAGS[:, 1:] = np.array(notes_params['harmonics_mags'], dtype=np.int16)

Now, let's set up some code to return the interpolated harmonics from a MIDI note and velocity for the piano.

```python
def harms_params_from_note_index_vel_index(note_index, vel_index):
    """Retrieve a (log-domain) harms_params list for a given note/vel index pair."""
    # A harmonic is represented as a [freq_cents, mag1_db, mag2_db, .. mag20_db] row.
    # A note is represented as NUM_HARMONICS (usually 20) rows.
    note_vel_index = note_index * len(VELOCITIES) + vel_index
    num_harmonics = NUM_HARMONICS[note_vel_index]
    start_harmonic = START_HARMONIC[note_vel_index]
    harms_params = FREQ_MAGS[start_harmonic : start_harmonic + num_harmonics, :]
    return harms_params

def interp_harms_params(hp0, hp1, alpha):
    """Return harm_param list that is alpha of the way to hp1 from hp0."""
    # hp_ is [[freq_h1, mag1, mag2, ...], [freq_h2, mag1, mag2, ..], ...]
    num_harmonics = min(hp0.shape[0], hp1.shape[0])
    # Assume the units are log-scale, so linear interpolation is good.
    return hp0[:num_harmonics] + alpha * (hp1[:num_harmonics] - hp0[:num_harmonics])

def cents_to_hz(cents):
    """Convert 'MIDI cents' frequency to Hz. 6900 cents -> 440 Hz."""
    return 440 * (2 ** ((cents - 6900) / 1200.0))

def db_to_lin(d):
    """Convert the dB-scale magnitudes to linear. 0 dB -> 0.00001, so 100 dB -> 1.0."""
    # Clip anything below 0.001 to zero.
    return np.maximum(0, 10.0 ** ((d - 100) / 20.0) - 0.001)

def harms_params_for_note_vel(note, vel):
    """Convert MIDI note and velocity into an interpolated harms_params list of harmonic specifications."""
    note = np.clip(note, NOTES[0], NOTES[-1])
    vel = np.clip(vel, VELOCITIES[0], VELOCITIES[-1])
    note_index = -1 + np.sum(NOTES[:-1] <= note)  # at most the last-but-one value.
    strike_index = -1 + np.sum(VELOCITIES[:-1] <= vel)
    lower_note = NOTES[note_index]
    upper_note = NOTES[note_index + 1]
    note_alpha = (note - lower_note) / (upper_note - lower_note)
    lower_strike = VELOCITIES[strike_index]
    upper_strike = VELOCITIES[strike_index + 1]
    strike_alpha = (vel - lower_strike) / (upper_strike - lower_strike)
    # We interpolate to describe a note at both strike indices,
    # then interpolate those to get the strike.
    harms_params = interp_harms_params(
        interp_harms_params(
            harms_params_from_note_index_vel_index(note_index, strike_index),
            harms_params_from_note_index_vel_index(note_index + 1, strike_index),
            note_alpha,
        ),
        interp_harms_params(
            harms_params_from_note_index_vel_index(note_index, strike_index + 1),
            harms_params_from_note_index_vel_index(note_index + 1, strike_index + 1),
            note_alpha,
        ),
        strike_alpha,
    )
    return harms_params
```

And then some AMY helper code to send out these parameters to the right voices. We're using the BYO_PARTIALS type in AMY, which allows you to set up your own partial-synthesis breakpoints using envelopes.

```python
def init_piano_voice(num_partials, base_osc=0, **kwargs):
    """One-time initialization of the unchanging parts of the partials voices."""
    amy.send(osc=base_osc, wave=amy.BYO_PARTIALS, num_partials=num_partials,
             amp={'eg0': 0}, **kwargs)
    for partial in range(1, num_partials + 1):
        bp_string = '0,0,' + ','.join("%d,0" % t for t in DIFF_TIMES)
        # We append a release segment to die away to silence over 200ms on note-off.
        bp_string += ',200,0'
        amy.send(osc=base_osc + partial, wave=amy.PARTIAL, bp0=bp_string,
                 eg0_type=amy.ENVELOPE_TRUE_EXPONENTIAL, **kwargs)

def setup_piano_voice(harms_params, base_osc=0, **kwargs):
    """Configure a set of PARTIALs oscs to play a particular note and velocity."""
    num_partials = len(harms_params)
    amy.send(osc=base_osc, wave=amy.BYO_PARTIALS, num_partials=num_partials, **kwargs)
    for i in range(num_partials):
        f0_hz = cents_to_hz(harms_params[i, 0])
        env_vals = db_to_lin(harms_params[i, 1:])
        # Omit the time-deltas from the list to save space.  The osc will keep
        # the ones we set up in init_piano_voice.
        bp_string = ',,' + ','.join(",%.3f" % val for val in env_vals)
        # Add final release.
        bp_string += ',200,0'
        amy.send(osc=base_osc + 1 + i, freq=f0_hz, bp0=bp_string, **kwargs)
```

We can now set up a PARTIALS patch #1024 in AMY (using store_patch) with independent per-harmonic envelopes. We set up the piano once, pre-configured to C4 mf for each note, and simply scale it during playback. We're setting 20 breakpoints independently for each of 20 harmonics, with data read from the piano-params.json file written by piano_heterodyne.ipynb.

```python
# The lowest note provides an upper bound on the number of partials we need to allocate.
patch_string = 'v0w10Zv%dw%dZ' % (NUM_HARMONICS[0] + 1, amy.PARTIAL)

def init_piano_voices(num_partials=NUM_HARMONICS[0]):
    amy.send(store_patch='1024,' + patch_string)
    amy.send(voices='0,1,2', load_patch=1024)
    init_piano_voice(num_partials, voices='0,1,2')
    # piano_note_on (below) overwrites these settings before each note,
    # but pre-configure each note to C4.mf so we can experiment.
    setup_piano_voice(harms_params_for_note_vel(note=60, vel=80), voices='0,1,2')
```

And play those:

```python
piano_example(base_note=62, volume=5, init_command=init_piano_voices)
```

[audio/spectrogram: the fixed C4 mf patch played at several notes and velocities]

The mf D4 note now sounds quite realistic, because it's a reasonably accurate reproduction of the original recording. However, we're still simply scaling its overall magnitude to get different velocities. And when we change the pitch, we just squeeze or stretch the harmonics (and hence the notes' spectral envelopes), which does not sound at all realistic.

Instead, we need to interpolate the real piano recordings at different notes and strikes to get the actual envelopes we synthesize. The UIowa samples provide recordings of all 88 notes on the piano they sampled, at three strike strengths. We could include harmonic data for all of them, but adjacent notes are quite similar, so instead we encode 3 notes per octave (C, E, and Ab) and interpolate the 3 semitones between each adjacent pair. (We currently store only 7 octaves, C1 to Ab7, so 21 pitches).
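As a sanity check on that layout, here's a small sketch (hypothetical helper code, not part of the actual synth) that generates the stored pitch grid and finds the stored notes bracketing any playable note:

```python
# Stored pitch grid: C, E, and Ab in each of 7 octaves (MIDI C1 = 24 up to Ab7 = 104).
STORED_NOTES = [12 * (octave + 1) + chroma
                for octave in range(1, 8)
                for chroma in (0, 4, 8)]   # C, E, Ab

def bracketing_notes(midi_note):
    """Return the stored notes immediately below and above a playable MIDI note."""
    below = max(n for n in STORED_NOTES if n <= midi_note)
    above = min(n for n in STORED_NOTES if n >= midi_note)
    return below, above
```

For example, D4 (MIDI 62) falls between stored C4 (60) and E4 (64), so its envelopes are interpolated from those two.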

For velocity, we have no choice but to interpolate, since the three recorded strikes do not provide enough expressivity for performance. We analyze all three (for each pitch stored, so 63 notes total).

Playing a note, then, involves interpolating between four of the stored harmonic envelope sets (recall that each set consists of 20 breakpoints for up to 20 harmonics): To synthesize the D4 at, say, velocity 90, we use C4 at mf and ff (which I interpreted as velocities 80 and 120) as well as E4 mf and ff. By doing this interpolation separately for every (note, velocity) event, we get a much richer range of tones. In this case we recompute the piano voice on each note on, given the note number and velocity:

```python
def piano_note_on(note=60, vel=1, **kwargs):
    if vel == 0:
        # Note off.
        amy.send(vel=0, **kwargs)
    else:
        setup_piano_voice(harms_params_for_note_vel(note, round(vel * 127)), **kwargs)
        # We already configured the pitches and magnitudes in setup, so
        # the note and vel of the note-on are always the same.
        amy.send(note=60, vel=1, **kwargs)
```

Let's hear this much nicer version:

```python
piano_example(base_note=62, init_command=init_piano_voices, send_command=piano_note_on)
```

[audio/spectrogram: the fully interpolated piano synthesis]

This recovers both the spectral complexity of the original piano notes, and the variation of the spectrum both across the keyboard range and across strike intensities. The spectrogram of the original recordings is repeated below for comparison.

[spectrogram: the original UIowa recordings, repeated for comparison]

While there are plenty of details that have not been exactly preserved (most notably the noisy "clunk" visible around each onset of the recordings, but also the cutoff at 20 harmonics, which loses a lot of high-frequency energy for the low note), this synthesis feels much, much more realistic and "acoustic" than any of the previous syntheses.

Because we are representing each note as an explicit set of harmonics, we can do things that would be very hard with, e.g., a sample. By messing with the status of the PARTIALs oscs, we can listen to each partial individually:

```python
# Configure the default voice for C4.ff
init_piano_voices()
setup_piano_voice(harms_params_for_note_vel(64, 120), time=0, voices='0')
# Convert each PARTIAL osc into regular SINE oscs and play them in order.
for partial in range(1, 20):
    time = partial * 400
    amy.send(time=time, voices='0', osc=partial, wave=amy.SINE, note=60, vel=1)
    amy.send(time=time + 390, voices='0', osc=partial, vel=0)
```

By restricting the number of partials the control osc thinks it is driving, we can listen to syntheses with different numbers of partials:

```python
# Re-initialize the voice (after flipping the oscs into SINEs).
init_piano_voices()
setup_piano_voice(harms_params_for_note_vel(64, 120), time=0, voices='0')
# Add partials one by one.
for num_partials in range(1, 20):
    time = num_partials * 400
    amy.send(time=time, voices='0', wave=amy.BYO_PARTIALS, num_partials=num_partials)
    amy.send(time=time + 15, voices='0', note=60, vel=1)
amy.send(time=8500, voices='0', vel=0)
```

We can also change the tuning of each harmonic away from what was provided by the analysis. For instance, we can retune them to have different inharmonicities (see below for a discussion of piano inharmonicity):

```python
def retune_partials(f0=263.3, beta=0.0003, num_partials=20, **kwargs):
    for partial in range(1, num_partials + 1):
        amy.send(osc=partial,
                 freq=partial * f0 * np.sqrt(1 + beta * partial * partial),
                 **kwargs)

init_piano_voices()
# Try it for a low note.
setup_piano_voice(harms_params_for_note_vel(32, 120), voices='0', time=0)
# Tuning from analysis
amy.send(time=0, voices='0', note=60, vel=1)
amy.send(time=2000, voices='0', vel=0)
# Unstretched tuning
retune_partials(f0=cents_to_hz(3200), beta=0, voices='0', time=2200)
amy.send(time=2210, voices='0', note=60, vel=1)
amy.send(time=4200, voices='0', vel=0)
# Parametric stretched tuning, exaggerated
retune_partials(f0=cents_to_hz(3200), beta=0.0008, voices='0', time=4400)
amy.send(time=4410, voices='0', note=60, vel=1)
amy.send(time=6400, voices='0', vel=0)
```

We can also do interesting things with interpolation. For instance, we can interpolate pitches more finely than the standard semitones:

```python
init_piano_voices()
hps_c4 = harms_params_for_note_vel(60, 120)
hps_c5 = harms_params_for_note_vel(72, 120)
for quarter_tone in range(24):
    time = quarter_tone * 400
    hps_interpolated = interp_harms_params(hps_c4, hps_c5, quarter_tone / 24)
    setup_piano_voice(hps_interpolated, time=time, voices='0')
    amy.send(time=time + 10, voices='0', note=60, vel=1)
    amy.send(time=time + 500, voices='0', vel=0)
```

By using interpolation factors outside the range (0, 1) we can even extrapolate the strike strength:

```python
init_piano_voices()
hps_pp = harms_params_for_note_vel(60, 40)
hps_ff = harms_params_for_note_vel(60, 120)
for strike in range(5):
    time = strike * 400
    strike_alpha = 0.4 * (strike - 1)  # strike_alpha ranges from -0.4 to +1.2
    hps_interpolated = interp_harms_params(hps_pp, hps_ff, strike_alpha)
    setup_piano_voice(hps_interpolated, time=time, voices='0')
    amy.send(time=time + 10, voices='0', note=60, vel=1)
    amy.send(time=time + 500, voices='0', vel=0)
```

If you're curious about exactly how we extracted the harmonic frequencies and envelopes, the rest of this page provides an overview of the process implemented in piano_heterodyne.ipynb.

If you have a Tulip or want to try Tulip on the web, you can play this piano synthesis live with a MIDI device. Use the Voices app to switch to the dpwe piano (256) patch, or type midi.config.add_synth(channel=1, patch_number=256, num_voices=4).

Extracting harmonic envelopes

How exactly do we capture the envelopes for each harmonic in our additive model? In principle, this is simply a matter of dissecting a spectrogram of the individual note (like the images above) to measure the intensity of each individual horizontal line. In practice, however, I wanted something with finer resolution in time and frequency to obtain very accurate parameters. I used heterodyne analysis, which I'll now explain.

The Fourier series expresses a periodic waveform as the sum of sinusoids at multiples of the fundamental frequency ("harmonics"), with the phases and amplitudes of each harmonic determining the resulting waveform. It's mathematically convenient to describe these sinusoids with complex exponentials, essentially sinusoids with two-dimensional values at each moment, where one axis carries our regular sinusoid and the other (imaginary) axis carries the same sinusoid phase-shifted by 90 degrees:

[figure: a complex exponential as a two-dimensional sinusoid]

Each sinusoid is constructed as the sum of a pair of complex exponentials with positive and negative frequencies; the imaginary parts cancel out leaving the real sinusoid. Thus, the full Fourier spectrum of a real signal has mirror-image positive and negative parts (although we generally only plot the positive half):

[figure: the mirror-image positive and negative frequency spectrum of a real signal]

The neat thing about this complex-exponential Fourier representation is that multiplying a signal by a complex exponential is a pure shift in the Fourier spectrum domain. So, by multiplying by the complex exponential at the negative of a particular harmonic frequency, we can shift that harmonic down to 0 Hz (d.c.). The spectrum is no longer mirrored around zero, so the imaginary parts won't cancel out, but we only want its magnitude anyway. Then, by low-pass filtering (i.e., smoothing) that waveform, we can cancel out all the other harmonics leaving only the envelope of the one harmonic we are targeting. By smoothing over a window that exactly matches the fundamental period, we can exactly cancel all the other sinusoid components because they will complete a whole number of cycles in that period. See make_piano_examples.ipynb for how these figures were prepared:

[figures: the harmonic shifted down to 0 Hz by heterodyning, and its envelope after low-pass smoothing]
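The whole heterodyne recipe fits in a few lines of numpy. Here's a sketch with a synthetic two-harmonic signal (not the notebook's actual code): multiply by a complex exponential at minus the harmonic frequency, smooth over exactly one fundamental period to cancel the other components, and take the magnitude.

```python
import numpy as np

sr = 8000
f0 = 200.0                       # fundamental; one period is exactly sr / f0 = 40 samples
t = np.arange(sr) / sr           # one second of time
# Synthetic test signal: two harmonics with different decaying envelopes.
env2_true = 0.8 * np.exp(-2 * t)
x = np.exp(-4 * t) * np.cos(2 * np.pi * f0 * t) \
    + env2_true * np.cos(2 * np.pi * 2 * f0 * t)

# Heterodyne: multiply by a complex exponential at minus the harmonic frequency,
# shifting the 2nd harmonic down to 0 Hz (d.c.)...
shifted = x * np.exp(-2j * np.pi * 2 * f0 * t)
# ...then smooth over exactly one fundamental period, which cancels every other
# harmonic (each completes a whole number of cycles in the window).
period = int(sr / f0)
kernel = np.ones(period) / period
# The factor of 2 restores the half of the energy sitting at the negative frequency.
env2_est = 2 * np.abs(np.convolve(shifted, kernel, mode='same'))
```

Away from the edges, env2_est tracks env2_true closely even though the first harmonic is stronger.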

Finding harmonic frequencies and piano inharmonicity

The heterodyne extraction allows us to extract sample-precise envelopes for harmonics at specific frequencies, but we need to give it the exact frequencies we want it to extract. Again, in principle this is straightforward: the harmonics should occur at integer multiples of the fundamental frequency, and if the piano is in tune, we already know the fundamental frequencies for each note.

It turns out, however, that piano notes are not perfectly harmonic: They can be well modeled as the sum of fixed-frequency sinusoids, but those sinusoids are not exact integer multiples of a common fundamental. This is a consequence of the stiffness of the steel strings (I'm told!) which makes the speed of wave propagation down the strings higher for higher harmonics. This piano inharmonicity has been credited with some of the "warmth" of piano sounds, and is something we want to preserve in our synthesis. In order to precisely extract each harmonic for each note, we need to individually estimate the inharmonicity coefficient for each string (because the strings are all different thicknesses, the inharmonicity varies across the range of the piano).

I estimated the inharmonicity by extracting very precise peak frequencies from a long Fourier transform of the piano note, then fitting the theoretical equation f_n = n·f0·√(1 + B·n²) (from this StackExchange explanation) to those values.

[figures: measured partial frequencies and the fitted inharmonicity curves]

These plots come from the "Inharmonicity estimation" part of piano_heterodyne.ipynb. Estimating the inharmonicity for each note allowed me to extract harmonic envelopes precisely corresponding to each specific harmonic of each piano note. This was important because when we are interpolating between different harmonic envelopes, we want to be sure we're looking at the same harmonic in both notes.
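The fitting step itself can be sketched with ordinary least squares (synthetic data here, not the notebook's estimator). Squaring the model gives (f_n / n)² = f0² + (f0²·B)·n², which is linear in n², so a straight-line fit recovers both f0 and B:

```python
import numpy as np

def estimate_inharmonicity(partial_freqs):
    """Fit f_n = n * f0 * sqrt(1 + B * n**2) to measured partial frequencies.

    Squaring gives (f_n / n)**2 = f0**2 + (f0**2 * B) * n**2: linear in n**2.
    """
    n = np.arange(1, len(partial_freqs) + 1)
    y = (np.asarray(partial_freqs) / n) ** 2
    slope, intercept = np.polyfit(n ** 2, y, 1)
    f0 = np.sqrt(intercept)
    B = slope / intercept
    return f0, B

# Synthetic partials for a C4-ish string with B = 0.0004.
true_f0, true_B = 261.63, 0.0004
n = np.arange(1, 21)
freqs = n * true_f0 * np.sqrt(1 + true_B * n ** 2)
est_f0, est_B = estimate_inharmonicity(freqs)
```

On noise-free data like this the fit recovers the true f0 and B essentially exactly; with real measured peaks it gives a least-squares estimate.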

Describing envelopes

We can extract sample-accurate envelopes for as many harmonics as we want for each of the real piano note recordings we have. But to turn them into a practical additive-synthesis instrument on AMY we need to think about efficiency. Right now, the harmonic envelopes are represented as 44,100 values per second of recording, and we're modeling something like the first 5 seconds of each note (the low notes can easily be 20 seconds long before they decay into oblivion). But the AMY envelopes can only handle up to 24 breakpoints (it used to be 8 before I changed it to serve this project!) so we need some way to summarize the envelopes in a smaller number of straight-line segments (which is what the envelope breakpoints define).

Additive synthesis of individual harmonics is so powerful because sounds like piano notes are so well described by a small number of harmonics with constant or slowly-changing frequency and amplitude. Looking at the four harmonic envelopes extracted in the previous section, it's obvious that the long tails are almost entirely smooth and could be described with a small number of parameters. The initial portions are more complex, however, including going up and down. Reproducing them will need more breakpoints, and it's not immediately obvious how to choose those breakpoints to give the best approximation.

I spent a while trying to come up with ad-hoc algorithms to accurately match an arbitrary envelope with a few line segments - see the "Adaptive magnitude sampling" section of piano_heterodyne.ipynb. The goal was to choose breakpoint times (and values) that preserved the most detail of the original envelope, even though I couldn't define exactly what I wanted. However, the outcome was naturally that there would be different breakpoint times for each note, which made interpolation between different notes very problematic - can we interpolate the breakpoint times too? (I tried it, and it fared badly in practice because of wildly varying allocations of breakpoints to different time regions).

In the end I gave up and used a much simpler strategy of predefining a set of breakpoint sample times that are constant for all notes. Although this is inevitably sub-optimal for any given note, it gives a much more solid foundation for interpolating between notes. And it turns out that the accuracy lost in modeling the envelopes doesn't seem to be too important perceptually. To respect the idea that the envelopes have an initial period of considerable detail, followed by a longer, smoother evolution, I used exponential spacing of the sampling times. Specifically, each envelope is described by samples at 20 instants: 4, 8, 12, 16, 24, 32, 48, 64, 96, 128, 192, 256, 384, 512, 768, 1024, 1536, 2048, 3072, 4096 milliseconds after onset. After the initial value, these follow a pattern of 2^n and 1.5·2^n milliseconds, giving the exponential spacing. This plot shows the original envelope with the breakpoint approximation superimposed, on two timescales: the first 150 msec on the left, and the full 4.1 sec on the right (with the 150 msec edge of the left plot shown as a dotted line).
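That grid is easy to regenerate programmatically; a quick sketch (these are the values stored as sample_times_ms in piano-params.json, as far as described above):

```python
# Envelope breakpoint times in ms: after the initial 4 ms,
# alternate 2**n and 1.5 * 2**n for exponential spacing.
SAMPLE_TIMES_MS = [4]
for n in range(3, 12):
    SAMPLE_TIMES_MS += [2 ** n, 3 * 2 ** (n - 1)]   # 2**n, then 1.5 * 2**n
SAMPLE_TIMES_MS.append(4096)
```

This reproduces the 20 instants listed above.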

Although a bunch of detail has been lost, it still provides a suitably complex character to the resyntheses.

These 20 envelope samples, along with the measured center frequency, are then the description for each of the up to 20 harmonics describing each of the (7 octaves x 3 chroma x 3 strike strengths) notes, or about 1200 envelopes total. This is the data stored in piano-params.json and read by tulip_piano.py.

This is a lot of data to get your head around! It's 4 dimensional - envelope magnitude as a function of time, harmonic number, fundamental pitch, and strike strength. There's still a lot of investigation to be done, but here's a 3D plot of the modeled harmonic envelopes (up to harmonic 20) for the three different strike strengths of C4 (261.63 Hz):

We can see the trend of energy dropping off with harmonic number in every case, and the harmonic magnitudes getting larger for stronger strikes. But notice also that while the max magnitude of the first harmonic (purple) increases from 85 dB for _pp_ to 110 dB for _ff_ (about 25 dB), the max energy of the 20th harmonic (red) increases from under 20 dB to over 60 dB - maybe a 45 dB increase. This is the relative "brightening" of the timbre for louder notes.



DAn Ellis - dan.ellis@gmail.com
