Documents the API changes compared to the LavaLink API, the API design NodeLink is based on.
While NodeLink has the same events as LavaLink to ensure full compatibility, it emits them at different times.
Unlike LavaLink, NodeLink emits the `TrackStartEvent` when `@performanc/voice` starts sending the audio data to Discord, while LavaLink emits the event when it receives the play request.
There's no concept of a `TrackStuckEvent` in NodeLink, as a stuck track can't be reliably detected. When a track fails, NodeLink emits the `TrackExceptionEvent` instead.
NodeLink's filtering system doesn't differ from LavaLink's; this section exists to document lesser-known details about the filters.
Filters use a pipeline to process the audio, allowing the stacked filters to be processed in parallel. For example, you can use both the `equalizer` and `timescale` filters at the same time.
There's no limit on how many filters you can use at the same time, but be aware of the performance impact of stacking too many filters at once.
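As an illustration only, a player update that stacks both filters could look like the sketch below. The field names follow the LavaLink v4 filters API that NodeLink mirrors, and the values are placeholders:

```js
// Sketch: a player update body that stacks two filters. Field names follow the
// LavaLink v4 filters API; values are illustrative. This body would be sent
// with PATCH /v4/sessions/{sessionId}/players/{guildId}.
const playerUpdate = {
  filters: {
    equalizer: [
      { band: 0, gain: 0.25 } // boost the lowest band
    ],
    timescale: { speed: 1.1, pitch: 1.0, rate: 1.0 } // play 10% faster
  }
}
```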
NodeLink's endpoints cover most of the LavaLink endpoints, except the route planner. It also features new endpoints.
Note
All responses from both LavaLink and NodeLink include the `Lavalink-Api-Version` header, which contains the version of the API.
NodeLink offers response compression in multiple formats:
- Brotli
- Gzip
- Deflate
Note
Brotli should be used whenever possible. Gzip and Deflate are meant to ensure compression availability for all clients, systems and languages.
NodeLink enforces the `Client-Name` header to be sent and to match the `NAME/VERSION (comment - optional)` format. It identifies the client and ensures that the client is compatible with NodeLink's API.
Note
The `Client-Name` header should be hardcoded in the client, as it is used to identify the client and not the bot.
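A minimal sketch of a request carrying these headers follows; the address, password and client name are placeholders, and `/v4/info` is the standard LavaLink information endpoint:

```js
// Sketch: a request that identifies the client and opts into compression.
// Address, password and client name are placeholders.
const response = await fetch('http://localhost:2333/v4/info', {
  headers: {
    Authorization: 'youshallnotpass',             // NodeLink's password
    'Client-Name': 'MyClient/1.0.0 (my comment)', // NAME/VERSION (comment - optional)
    'Accept-Encoding': 'br, gzip, deflate'        // prefer Brotli when available
  }
})

// Every response carries the API version header.
console.log(response.headers.get('Lavalink-Api-Version'))
```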
It's important to wait for the response of the endpoint to check whether there's an error. Both LavaLink and NodeLink give informative error messages for proper debugging and handling.
As documented in the LavaLink documentation, the error response is a JSON object with the following keys:
- `timestamp`: The timestamp of the error in milliseconds since the Unix epoch
- `status`: The HTTP status code
- `error`: The HTTP status code message
- `trace?`: The stack trace of the error when `trace=true` has been sent as a query param
- `message`: The error message
- `path`: The request path
However, NodeLink's error response always includes the `trace` key, containing the stack trace of the error obtained via `new Error().stack`.
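A sketch of checking for an error follows; the failing route and the logged fields are illustrative:

```js
// Sketch: reading an error response. The failing route is illustrative;
// address and password are placeholders.
const res = await fetch('http://localhost:2333/v4/sessions/unknown/players/0', {
  headers: { Authorization: 'youshallnotpass', 'Client-Name': 'MyClient/1.0.0' }
})

if (!res.ok) {
  const error = await res.json()
  console.error(`${error.status} ${error.error} at ${error.path}: ${error.message}`)
  console.error(error.trace) // always present on NodeLink
}
```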
NodeLink doesn't support resuming, as it's not needed. The client should always keep the connection alive; if the connection is lost, the client should reconnect and send the play request again.
Although NodeLink follows most of the structure of `/v4/stats`, we made `frameStats` an object instead of `null`, so the client can always expect an object.
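A sketch of reading the stats, relying on `frameStats` always being an object; the field names follow the LavaLink v4 `/v4/stats` response, and the address and password are placeholders:

```js
// Sketch: polling /v4/stats. On NodeLink, `frameStats` can be read without a
// null check. Field names follow the LavaLink v4 stats object.
const stats = await fetch('http://localhost:2333/v4/stats', {
  headers: { Authorization: 'youshallnotpass', 'Client-Name': 'MyClient/1.0.0' }
}).then((res) => res.json())

console.log(`players: ${stats.players}, playing: ${stats.playingPlayers}`)
console.log(`frames sent: ${stats.frameStats.sent}, deficit: ${stats.frameStats.deficit}`)
```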
For better identification of tracks, NodeLink introduces more `loadType`s, which are used to identify the type of the URL:
- `album` (playlist-like)
- `artist` (playlist-like)
- `playlist` (playlist-like)
- `station` (playlist-like)
- `podcast` (playlist-like)
- `show` (playlist-like)
- `short` (track-like)
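A sketch of branching on the extended `loadType`s follows; it assumes the standard `/v4/loadtracks` endpoint and simply groups the new types into the two categories above:

```js
// Sketch: grouping NodeLink's extended loadTypes. Playlist-like results can be
// handled like LavaLink's `playlist`; `short` can be handled like `track`.
const PLAYLIST_LIKE = ['album', 'artist', 'playlist', 'station', 'podcast', 'show']

const identifier = encodeURIComponent('https://example.com/some-album') // placeholder URL
const result = await fetch(`http://localhost:2333/v4/loadtracks?identifier=${identifier}`, {
  headers: { Authorization: 'youshallnotpass', 'Client-Name': 'MyClient/1.0.0' }
}).then((res) => res.json())

if (PLAYLIST_LIKE.includes(result.loadType)) {
  // handle as a collection of tracks
} else if (result.loadType === 'short' || result.loadType === 'track') {
  // handle as a single track
}
```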
NodeLink features a new endpoint, `loadLyrics`, which is used to load lyrics for a track from the supported sources. Currently, it supports the following sources:
- Genius (Generic)
- MusixMatch (Generic)
- Deezer
- Spotify
- YouTube
- YouTube Music
Note
Deezer and Spotify require `arl` and `sp_dc` respectively to be used.
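As a sketch only: the request below assumes the endpoint lives under the v4 prefix and accepts an encoded track through a query parameter; both the exact path and the parameter name are assumptions, so consult the endpoint reference for the real shape:

```js
// Sketch only: the path casing and the `encodedTrack` query parameter are
// assumptions; ENCODED_TRACK is a placeholder for a track's encoded string.
const lyrics = await fetch('http://localhost:2333/v4/loadLyrics?encodedTrack=ENCODED_TRACK', {
  headers: { Authorization: 'youshallnotpass', 'Client-Name': 'MyClient/1.0.0' }
}).then((res) => res.json())

console.log(lyrics)
```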
NodeLink offers a completely new WebSocket endpoint, `/connection/data`, which is used to receive the audio data from the voice connection.
To properly connect to the WebSocket, clients should send the following headers:
- `Authorization`: NodeLink's secret key
- `User-Id`: The user ID of the user who's connecting
- `Guild-Id`: The guild ID of the guild where the user is connecting
- `Client-Name`: `NAME/VERSION (comment - optional)` to identify the client
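A minimal sketch of opening this connection with the `ws` package; the address, key and IDs are placeholders:

```js
// Sketch: connecting to /connection/data with the required headers.
import { WebSocket } from 'ws'

const socket = new WebSocket('ws://localhost:2333/connection/data', {
  headers: {
    Authorization: 'youshallnotpass',            // NodeLink's secret key
    'User-Id': '123456789012345678',             // placeholder user ID
    'Guild-Id': '876543210987654321',            // placeholder guild ID
    'Client-Name': 'MyClient/1.0.0 (my comment)' // NAME/VERSION (comment - optional)
  }
})

socket.on('open', () => console.log('connected to /connection/data'))
```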
Messages are sent as plain JSON. The base structure of a message is:
```json
{
  "op": "speak",
  "type": ...,
  "data": ...
}
```
NodeLink emits the `startSpeakingEvent` type message when the user starts speaking. The `data` field contains the following keys:
- `userId`: The user ID of the user who started speaking.
- `guildId`: The guild ID of the guild where the user started speaking.
The `endSpeakingEvent` type message is emitted when the user stops speaking and all data is processed. The `data` field contains the following keys:
- `userId`: The user ID of the user who stopped speaking.
- `guildId`: The guild ID of the guild where the user stopped speaking.
- `data`: The audio data received from the user in base64.
- `type`: The type of the audio data. Can be either `opus` or `pcm`. Older versions may include `ogg/opus`.
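A sketch of handling both event types on the same connection; it reuses the `socket` from the connection sketch above and only decodes the base64 audio, leaving playback or storage to the client:

```js
// Sketch: reacting to speaking events received on /connection/data.
socket.on('message', (raw) => {
  const message = JSON.parse(raw)
  if (message.op !== 'speak') return

  if (message.type === 'startSpeakingEvent') {
    console.log(`${message.data.userId} started speaking in ${message.data.guildId}`)
  }

  if (message.type === 'endSpeakingEvent') {
    const audio = Buffer.from(message.data.data, 'base64') // opus or pcm, see `type`
    console.log(`got ${audio.length} bytes of ${message.data.type} audio from ${message.data.userId}`)
  }
})
```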
NodeLink doesn't have a route planner, as it's not needed. It's recommended to use a load balancer to distribute the load between nodes, and a reverse proxy to ensure the security and stability of the system.