Feedback, features propositions, more questions (time, duration, seeking, audio samples, etc) #6
Thanks for the nice feedback :-)
The sequencer timing happens inside the WebAssembly synth, based on the actual samples rendered. If you want to have a go at this, you'd have to expose the tick through the AudioWorkletProcessor.
So this would have to be done in AssemblyScript, and there would have to be interfaces for filling memory with sample data.
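For illustration, a rough sketch of what exposing the tick from the AudioWorkletProcessor side could look like (the message shape and rendering details here are assumptions, not the project's actual code):

```javascript
// hypothetical processor module, loaded with audioWorklet.addModule(...)
class SynthProcessor extends AudioWorkletProcessor {
  process(inputs, outputs, parameters) {
    // ... render the next 128 sample frames from the WASM synth into outputs[0] ...

    // report how far playback has advanced, based on the samples actually rendered
    this.port.postMessage({
      type: 'tick',
      currentTime: currentFrame / sampleRate // currentFrame and sampleRate are AudioWorkletGlobalScope globals
    });
    return true; // keep the processor alive
  }
}
registerProcessor('synth-processor', SynthProcessor);
```

The main thread would then listen on `audioWorkletNode.port.onmessage` to update the UI with the current tick.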
And you can see more examples of how to generate sounds here: https://www.youtube.com/watch?v=wP__g_9FT4M Also, yes, I want more presets. I have some instruments already in the sources, but I think it would be easier
Naming... yes... not sure what to call it yet. Maybe something in the direction of w-awesome, like pronouncing WASM :)
@petersalomonsen thank you for the reply! :)
My 5 cents here: since the project is targeting JS devs, I guess it makes sense to use the json format, so users can easily write and save .json files/patches; that should be a good UX :) At the moment I cannot decide what I would like to start working on more: 1. time, duration, scroll, or 4. adding and making music with Yoshimi, asap :) I will probably work on this and that, and we'll see where it takes me.
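Purely as an illustration of that idea (these field names are made up, not an existing format), a preset saved as .json could be as simple as:

```javascript
// hypothetical instrument patch as a plain JSON-serializable object
const sineleadPatch = {
  name: 'sinelead',
  type: 'note',
  waveform: 'sine',
  envelope: { attack: 0.01, decay: 0.2, sustain: 0.7, release: 0.3 }
};
// JSON.stringify(sineleadPatch) could then be written to a .json file and shared
```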
Do you have any thoughts on the timeline, when you would be working on this? Also, I guess it makes sense to use something like https://github.com/jazz-soft/JZZ as they claim to support both node.js and the browser?
You just put Yoshimi on top of my list :) I had to test it yesterday. Managed to build it, and will start looking into integrating it. Those sounds are just amazing!
@petersalomonsen Hah! I'm in the same mood, playing with it right now; these pads sound amazing, perfect for my darkish ambient things, and I don't understand why it's not much more popular! It's not a problem to play it in the browser with the JZZ midi library, but I cannot make it work and output audio in node.js following your example. I understand now that I could just use the provided glue code and abstract away the AudioWorklet somehow to pipe it to SOX, but meh, I cannot wrap my head around it; I've never worked with .wasm in node.js before, so I need your help with this! I really hope you will show some examples of how to pipe it to SOX! :)
Interestingly, I was also able to play the AudioWorklet browser version directly from a node.js environment by sending midi directly to the browser. The JZZ library does provide what it promises: it works the same both in the browser and in node.js! Just, from what I've noticed overall these days, there is still a big performance difference when I play audio from node.js piped to SOX versus the AudioWorklet: SOX loads my CPU at a 20%-25% rate while the AudioWorklet version is at 45%-50%.
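For reference, the basic example from the JZZ README runs unchanged in both environments; only how the library is loaded differs:

```javascript
// node.js: const JZZ = require('jzz');   browser: load JZZ with a <script> tag
JZZ().or('Cannot start MIDI engine!')
     .openMidiOut().or('Cannot open MIDI Out port!')
     .wait(500).send([0x90, 60, 127])   // note on: middle C, velocity 127
     .wait(500).send([0x80, 60, 0]);    // note off
```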
I'm looking into the pure browser-based approach. Managed to control Yoshimi from javascript here (click the play song button): https://petersalomonsen.github.io/yoshimi/wam/dist/ This is a song I wrote in the nodejs setup, now translated to the web, so I think this will work fine with the live coding environment. I think for the web it's probably better to export wav directly from the web page instead of using sox.
Yoshimi support is here: #8
Cool!! I was playing around with it and it sounds very good, but it's kinda hard to grasp how the sequencer works. For example, in the provided song example I could not find a way to hold the pad notes longer than they are now; how could I hold a note for 5 or 15, etc., seconds? I tried to adjust
Regarding note lengths, consider this example:
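(The following is only a hypothetical sketch of the idea, assuming a notation where a note carries an explicit duration in beats; the actual notation in the demo song may differ.)

```javascript
// hypothetical duration-aware notation, not necessarily the project's actual API
global.bpm = 60;                       // at 60 BPM one beat lasts one second
addInstrument('pad', {type: 'note'});
playPatterns({
  // a pad note held for 15 beats, i.e. roughly 15 seconds at this tempo
  pad: pp(1, [[c3, 15]])
});
```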
Demo video: https://youtu.be/HH92wXnP4WU
@petersalomonsen Hey, it's working very well; it's already possible to compose whole songs! Thanks for your previous reply and for explaining things! I was following the whole progress, and it looks like what's left now is to implement .wav rendering and adding audio samples, or merging Yoshimi with the AssemblyScript version for those. Meanwhile, I was working on Time, Duration, Seeking and have had a bit of success with the first two in AssemblyScript mode, but I guess I'll wait until you finalize the whole version so I can proceed with those. I noticed that
The xml file is for controlling the whole Yoshimi synth. If you load multiple instruments, you'll see that they all appear in the xml file. For OfflineAudioContext in the Yoshimi context there's no worry about the timing, as the sequencer player is embedded into the audioworklet. So you just start it and it will trigger the midi events from within the audioworklet processor.
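A generic sketch of how offline rendering with an AudioWorklet-based synth can look in the browser (the module path and node name here are placeholders, not the project's actual files):

```javascript
async function renderSong(durationSeconds) {
  const sampleRate = 44100;
  const ctx = new OfflineAudioContext(2, sampleRate * durationSeconds, sampleRate);

  // load the same processor module used for live playback (placeholder file name)
  await ctx.audioWorklet.addModule('synth-processor.js');
  const synth = new AudioWorkletNode(ctx, 'synth-processor', { outputChannelCount: [2] });
  synth.connect(ctx.destination);

  // the sequencer runs inside the worklet, so rendering just plays the song faster than real time
  return await ctx.startRendering(); // resolves to an AudioBuffer with the whole song
}
```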
Export audio to wav is work in progress: I already made an export that was sent to Spotify: https://twitter.com/salomonsen_p/status/1261404943484772353?s=20
Export to wav is implemented now.
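For reference, a common way to turn a rendered AudioBuffer into a 16-bit PCM wav blob in the browser looks roughly like this (a generic sketch, not necessarily how this project implements it):

```javascript
function audioBufferToWav(buffer) {
  const numChannels = buffer.numberOfChannels;
  const dataLength = buffer.length * numChannels * 2;            // 16-bit samples
  const view = new DataView(new ArrayBuffer(44 + dataLength));
  const writeString = (offset, s) => [...s].forEach((c, i) => view.setUint8(offset + i, c.charCodeAt(0)));

  // RIFF/WAVE header
  writeString(0, 'RIFF');
  view.setUint32(4, 36 + dataLength, true);
  writeString(8, 'WAVE');
  writeString(12, 'fmt ');
  view.setUint32(16, 16, true);                                  // fmt chunk size
  view.setUint16(20, 1, true);                                   // PCM format
  view.setUint16(22, numChannels, true);
  view.setUint32(24, buffer.sampleRate, true);
  view.setUint32(28, buffer.sampleRate * numChannels * 2, true); // byte rate
  view.setUint16(32, numChannels * 2, true);                     // block align
  view.setUint16(34, 16, true);                                  // bits per sample
  writeString(36, 'data');
  view.setUint32(40, dataLength, true);

  // interleave channels and convert float [-1, 1] to 16-bit integers
  let offset = 44;
  for (let i = 0; i < buffer.length; i++) {
    for (let ch = 0; ch < numChannels; ch++) {
      const s = Math.max(-1, Math.min(1, buffer.getChannelData(ch)[i]));
      view.setInt16(offset, s < 0 ? s * 0x8000 : s * 0x7fff, true);
      offset += 2;
    }
  }
  return new Blob([view], { type: 'audio/wav' });
}
```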
Hello Peter, excellent project! I was wondering if you have it in mind to write documentation for your project? I have spent the last 3 days trying to understand how your code works, and until now it has been difficult for me to understand how the sound is generated through AssemblyScript. I'm so used to creating synthesizers quickly in TONE.js, but I guess I'm pretty noob for your code >:'c.
Thanks @HadrienRivere :-) The plan is to add more documentation and tutorials, but it's great to get questions so that I get an idea about what is unclear to others. I don't know TONE.js very well, but after checking it quickly I think the main difference is that in TONE.js you declare the properties of your synth (envelopes, waveforms etc.), while in my project you calculate every signal value that is sent to the audio output. So it's more low level, but you also get much more control. I've tried to make a minimal example below (and more explanation follows after that). Try pasting in the following sources.

Sequencer pane (editor to the left) - JavaScript:

```javascript
// tempo in beats per minute
global.bpm = 120;
// pattern size exponent (2^4 = 16)
global.pattern_size_shift = 4;
// register an instrument
addInstrument('sinelead', {type: 'note'});
playPatterns({
    // create a pattern with notes to play
    sinelead: pp(1, [c5, d5, e5, f5])
});
```

Synth pane (editor to the right) - AssemblyScript:

```typescript
import { notefreq, SineOscillator } from './globalimports';

// pattern size exponent (2^4 = 16 steps)
export const PATTERN_SIZE_SHIFT = 4;
// beats per pattern exponent (2^2 = 4 steps per beat)
export const BEATS_PER_PATTERN_SHIFT = 2;

// create a simple oscillator (sine wave)
let osc: SineOscillator = new SineOscillator();

/**
 * callback from the sequencer, whenever there's a new note to be played
 */
export function setChannelValue(channel: usize, value: f32): void {
    // set the frequency of the oscillator based on the incoming note
    osc.frequency = notefreq(value);
}

/**
 * callback for each sample frame to be rendered
 * this is where the actual sound is generated
 */
export function mixernext(leftSampleBufferPtr: usize, rightSampleBufferPtr: usize): void {
    // get the next value from the oscillator
    let signal = osc.next();
    // store it in the left channel
    store<f32>(leftSampleBufferPtr, signal);
    // store it in the right channel
    store<f32>(rightSampleBufferPtr, signal);
}
```

The key to sound generation is the `mixernext` function, which is called for every sample frame to be rendered. In my more advanced examples I mix more instruments, and also make them richer by mixing (adding) waveforms, applying echo and reverb. So what is different from TONE.js, I guess, is that in this project you set up the math and calculate every sample for the audio output, rather than just declaring the properties of your sound. Still, I think it can be done quite simply this way, giving you much more control, and I'm also working on reducing the amount of code required. Hope this helps. Let me know if I pointed you in the right/wrong direction here :-)
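The idea of making the sound richer by adding waveforms and applying echo can be sketched in plain JavaScript terms (this is a generic illustration, not the project's AssemblyScript helpers):

```javascript
const sampleRate = 44100;
const delayLine = new Float32Array(Math.round(sampleRate * 0.3)); // 300 ms echo buffer
let delayPos = 0;

// one output sample for a note at `freq` Hz, `t` seconds into the note
function nextSample(t, freq) {
  // mix (add) two waveforms: a sine and a slightly detuned sawtooth
  const sine = Math.sin(2 * Math.PI * freq * t);
  const saw  = 2 * ((t * freq * 1.005) % 1) - 1;
  const dry  = 0.6 * sine + 0.3 * saw;

  // simple feedback echo: read the delayed sample, write back dry plus feedback
  const wet = delayLine[delayPos];
  delayLine[delayPos] = dry + wet * 0.4;
  delayPos = (delayPos + 1) % delayLine.length;

  return dry + wet * 0.5;
}
```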
@HadrienRivere
I think when starting to make music with "javascriptmusic" (we still need a good official name for this :D) you should NOT try doing things like you did with Web Audio/Tone.js, but think in terms of classical DSP and how music instruments/plugins actually work. Yeah, it also involves a bit of math, which may scare regular JS guys like me; clearly Peter has more advanced experience as an audio and software developer overall, so for him the things he does in his code are quite easy and obvious, and would probably be done the same way in any other language like C/C++, Rust, etc. Yeah, I think this is important to NOTE, again: even though Peter uses JS/TypeScript, overall it's not bound to the browser that much, and those algorithms could be used the same way in any other language/stack, while manipulating audio with Tone.js/Web Audio is kinda limited to the way we do audio in the browser only. I hope I did not confuse anyone; I'm quite a beginner in this field too, and it took me a while to realize how DSP and music programming should actually be done. It really helps to see more low-level projects written in C/C++; you may not know those languages, but it helps you understand why things are done the way they are. It's also really fun to see how years ago people were writing this code in complex low-level languages, while nowadays you can do the same thing in something like JS/TypeScript. Yeah, WASM is truly AWESOME! :)
@Catsvilles thanks :) And BTW: I'm using
Yeah, I think it's a suitable name for this project! :)
@petersalomonsen Hey Peter, as promised I started putting together a list of PadSynth implementations, for inspiration I guess. :) I was pretty sure there were a few made with TypeScript, but unfortunately, for some reason, I could only find JavaScript ones for now. Instead of creating a new issue here I decided to go with a new repo, as I have wanted to put together a list of cool things related to Web Audio for a long time already :) I also found something in Rust, if that is any help. Actually, for the last few months I have been actively getting into Supercollider (they even have VSTs now), but I miss doing things with JavaScript, so maybe someday I will go on exploring this project and trying to synthesize and sequence sounds with JS and AssemblyScript :)
Thank you for this @Catsvilles. The padsynth was a really good tip, and I really don't know how I haven't heard about it before. I've obviously been using it in Yoshimi/ZynAddSubFX, but there I just used the presets. After studying the algorithm I found that I've done some of the same things when synthesizing pads myself. The concept of spreading out the harmonics is something I've done a lot, but randomizing the phase as done in padsynth improves it a lot, and that's the part I've been missing. One downside of the padsynth algorithm as it is described is that it requires precalculating the wavetable, which results in quite a delay when changing parameters. So I started exploring an alternative approach that doesn't require any other precalculation than the periodic wave. And instead of having the harmonic spread in the IFFT, I play the periodic wave at different speeds. As far as I can see this gives the same result; it costs a little more in real-time calculation, but the startup time is significantly reduced, also because this way I can have a much smaller FFT giving the same result. My first attempt is a simple organ, check out the video below. I have to play a bit more with this to see if I'm on the right track, but so far it seems ok :)
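A rough Web Audio sketch of that idea (a browser approximation, not the actual implementation): build one periodic wave from a set of harmonics with randomized phases, then play several copies of it at slightly different speeds:

```javascript
function playSpreadPad(ctx, baseFreq = 220, voices = 5) {
  // harmonic amplitudes with random phases (the padsynth-style phase randomization)
  const n = 16;
  const real = new Float32Array(n);
  const imag = new Float32Array(n);
  for (let h = 1; h < n; h++) {
    const amp = 1 / h;
    const phase = Math.random() * 2 * Math.PI;
    real[h] = amp * Math.cos(phase);
    imag[h] = amp * Math.sin(phase);
  }
  const wave = ctx.createPeriodicWave(real, imag);

  // the same periodic wave played at slightly different speeds spreads the harmonics
  for (let v = 0; v < voices; v++) {
    const osc = ctx.createOscillator();
    osc.setPeriodicWave(wave);
    osc.frequency.value = baseFreq;
    osc.detune.value = (v - (voices - 1) / 2) * 8; // a few cents apart per voice
    const gain = ctx.createGain();
    gain.gain.value = 1 / voices;
    osc.connect(gain).connect(ctx.destination);
    osc.start();
  }
}
```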
Hi, I know, I know, last time I was after the node.js version, but that was before I realized that WebAssembly actually does everything I would ever need, and we can render and play audio server-side in node.js+sox the same as in the browser. WASM should truly be pronounced as AWESOME! :) The last few days I have spent creating and live coding music in the browser, and I've got a few ideas, feature requests and help offers:
1. Tracking current time while playing the song, calculating full song duration, seeking.
Now, for the first one, I know there are already a few attempts to track time with `logCurrentSongTime()` in pattern_tools.js, but unfortunately I could not hack it to work well. I found another way that works well: tracking currentTime of global.audioworkletnode.context. Now, in my experience developers use the setInterval() function a lot when making music players with JS, etc., but I'm sure there are better ways to dynamically update and log the current time while playing. I would dig more into the logCurrentSongTime() function and check how Tone.js does things; they have a Transport class with a .scheduleRepeat() function, and from what I know they use the time-precise clock of WebAudio, not the JS one. More about this: here.
In any case, one way or another, getting the current time while playing should be fairly easy to implement, but getting the full duration and seeking is where I'm limited in ideas. I just believe it would be a cool feature and good user experience to get the full song duration after we evaluate the code; then we could have a simple horizontal slider which scrolls while the song is playing, and on which we can update the current position. This would be a cool feature, so we don't have to wait for one part to play through when we actually want to hear the next patterns. What do you think? With your blessing and guidance I'm ready to get on this one and submit a PR as soon as I hack it together. So you could just give me your thoughts and the theory behind it, and I will do the coding if anything. :)
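A small sketch of tracking playback time from the Web Audio clock instead of setInterval (this assumes the AudioWorkletNode is reachable as global.audioworkletnode, as mentioned above, and that the start time is captured when playback starts):

```javascript
const ctx = global.audioworkletnode.context;
const startTime = ctx.currentTime;              // captured when playback starts

function updateTimeDisplay() {
  const elapsed = ctx.currentTime - startTime;  // seconds, from the sample-accurate audio clock
  console.log(`current song time: ${elapsed.toFixed(2)}s`);
  requestAnimationFrame(updateTimeDisplay);     // repaint-synced, no drifting timer
}
requestAnimationFrame(updateTimeDisplay);
```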
2. Audio samples. Okay, this one is a straight feature request. I remember you mentioned something about this in our previous issue discussion, and I'm sure you are already planning it in any case; I just want to mention that it would be cool to have the freedom of adding custom audio samples, for cool kicks, drums, atmos, percussion, etc. It would also be cool if this worked the same way as the whole AssemblyScript synth, both in the browser and in node.js+sox for quick audio rendering, so I guess we would have to implement a custom AudioBuffer/AudioBufferSource in TypeScript, without using the browser's AudioContext? Once we have the AudioBufferSource it should be fairly easy to implement something like a Sampler, allowing the user to play multiple samples with predefined pitches. I'm ready to help with this one too, let me know what you think!
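A generic Web Audio sketch of the Sampler idea (this uses the browser's AudioContext and is not this project's API; the sample URL is just an example): load one sample and replay it at different pitches by changing playbackRate:

```javascript
async function loadSample(ctx, url) {
  const data = await (await fetch(url)).arrayBuffer();
  return ctx.decodeAudioData(data);
}

function playSample(ctx, buffer, semitones = 0, when = 0) {
  const src = ctx.createBufferSource();
  src.buffer = buffer;
  // pitch shift by resampling: 2^(n/12) per n semitones (this also changes duration)
  src.playbackRate.value = Math.pow(2, semitones / 12);
  src.connect(ctx.destination);
  src.start(when);
}

// usage:
// const ctx = new AudioContext();
// const kick = await loadSample(ctx, 'samples/kick.wav');
// playSample(ctx, kick);      // original pitch
// playSample(ctx, kick, 7);   // a fifth up
```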
3. Now, a bit of feedback from a user who tried to create some music in the browser... Mixes... oh, I know JS fairly well but have never touched TypeScript before, and overall doing the DSP stuff, making instruments so you can later code patterns with them, is quite exhausting for an unprepared user :D I guess the current system has quite a high entry threshold for newcomers. Maybe it would be possible to implement some kind of presets of ready-made instruments/effects like other live coding environments do (read: Sonic Pi)? I know, I'm probably in over my head here, sorry, just sharing my thoughts. :)
4. I decided to forget about other ways of live coding music, like on a server with node.js, and stick with the AssemblyScript implementation you proposed, but I'm still thinking it would be cool to have the freedom of using other synthesizers with the current sequencer and interface. For example, there is a WASM version of Yoshimi; I believe we could use it too, composing in the browser and rendering the audio on the server with sox. Just, from what I understand the current sequencer grew out of your 4klang experiences; would it work with other synths, I mean, would it be possible at all? What do you think?
Thanks for reading, and sorry for that much text; I hope I'm not being too annoying here! One more thing: you should finally name your project, so people can be like: