Research
The research part of the wiki. This can be notes of any kind for the project.
A common thread in the following research is that price is always a factor. Redis currently seems like an ideal option, but it is relatively expensive.
This page roughly outlines the core requirements of the system's shared state. The premise is that clients may not be connected to the same instance that is handling their game, so we will use some form of shared state to communicate these core pieces of data.
My view is that designing the system with this in mind upfront helps if any scaling is ever on the table.
Requirements
- per game, we need a list of participating clients
- per game, message-hub events must be broadcast to all clients in near real time
- clients can (re)join an ongoing game and receive all of the game data for the current game
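The three requirements above can be captured as a small interface. Here is a minimal in-memory sketch (all names are hypothetical, not from any existing code); a real implementation would back this with Redis or a database rather than process memory:

```typescript
// Minimal sketch of the shared game state the requirements imply.
// All names are hypothetical; an eventual implementation would back
// this with Redis or a database rather than an in-process Map/Set.

type GameEvent = { clientId: string; payload: string; at: number };

class GameState {
  private clients = new Set<string>();              // per-game participant list
  private events: GameEvent[] = [];                 // full history for (re)joins
  private listeners: ((e: GameEvent) => void)[] = [];

  // A client (re)joining is recorded and gets the full history so far.
  join(clientId: string): GameEvent[] {
    this.clients.add(clientId);
    return [...this.events];
  }

  // Broadcast a message-hub event to every subscriber in near real time.
  publish(event: GameEvent): void {
    this.events.push(event);
    this.listeners.forEach((fn) => fn(event));
  }

  subscribe(fn: (e: GameEvent) => void): void {
    this.listeners.push(fn);
  }

  participants(): string[] {
    return [...this.clients];
  }
}
```

The `join` return value covers the (re)join requirement: a reconnecting client replays the history before subscribing to new events.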
Since the games are designed around many clients connecting to the system and receiving data from the server for one specific game, we need to solve the session affinity problem.
That is: with a single instance handling the games, we can communicate and broadcast game data to clients entirely in memory. With, say, 3 separate instances, a game running on one instance requires all of its players to connect to that same instance.
In most cloud-provided systems, routing clients to a specific instance like this is not something that is done by default.
We can solve this in several ways; here are some notes on the subject.
Socket.IO is a popular WebSocket framework that ships with a Redis adapter. This seems like a nice approach, with Redis acting as a broker between instances.
https://github.com/socketio/socket.io-redis
This also comes with a giant AWS Fargate blog post that can help us get quite far.
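The adapter's core idea, stripped of the Socket.IO specifics, is that each instance publishes broadcasts to a shared channel and relays whatever arrives to its locally connected clients. A rough sketch, with an in-memory event bus standing in for Redis pub/sub (all names, including the `game:42` channel, are hypothetical):

```typescript
import { EventEmitter } from "node:events";

// Stand-in for Redis pub/sub: in production each instance would hold
// its own Redis connection instead of sharing this in-process emitter.
const bus = new EventEmitter();

class Instance {
  // Locally connected clients: clientId -> send function.
  private local = new Map<string, (msg: string) => void>();

  constructor() {
    // Relay everything published on the shared channel to local clients.
    bus.on("game:42", (msg: string) => {
      this.local.forEach((send) => send(msg));
    });
  }

  connect(clientId: string, send: (msg: string) => void): void {
    this.local.set(clientId, send);
  }

  // Broadcasts go through the bus, so clients on *other* instances see them too.
  broadcast(msg: string): void {
    bus.emit("game:42", msg);
  }
}
```

With this shape, a client connected to instance B still receives a broadcast issued on instance A, which is exactly the session-affinity gap the Redis adapter fills.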
Google back in the day did this by hosting the WebSocket server on managed compute instances https://cloud.google.com/solutions/real-time-gaming-with-node-js-websocket#using_compute_engine_for_websocket_server. This seems OK, but I would like to avoid managing instances so that cost stays fully flexible.
It would also mean our application code needs to know which server or URL to return for a given game. That does not seem ideal for this use case.
Another approach is to share state in a data store other than memory. This seems like the right approach. The main open questions are performance and cost. For this game the latency tolerance is low, I think, since part of the game is guessing before the others do. Timing is part of the game: if a write to the message board takes > 200 ms, does that affect the experience?
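One way to make the 200 ms question concrete during a spike would be to time the write-plus-broadcast path against an explicit budget. A trivial harness sketch (the budget and the placeholder operation are assumptions, not measurements of any real store):

```typescript
// Hypothetical latency check: time an async "write + broadcast" step
// against the ~200 ms tolerance discussed above.
const BUDGET_MS = 200;

async function timed(op: () => Promise<void>): Promise<number> {
  const start = Date.now();
  await op();
  return Date.now() - start;
}

// Placeholder operation; a real spike would call the candidate data store here.
async function fakeWrite(): Promise<void> {
  await new Promise((resolve) => setTimeout(resolve, 5));
}
```

A real spike would swap `fakeWrite` for a call to each candidate store and compare the numbers.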
Here we can start with SQL relatively easily, and performance work can come as the need arises. More data is needed to spike this out, but it seems like the all-round answer so far.
The general idea is that NoSQL scales well in certain ways. For our little app here, we want to remain simple and functional.
AWS has a sample realtime chat application built on DynamoDB https://github.com/aws-samples/simple-websockets-chat-app
Discord's use case is not really similar to ours, but I found this a good read. https://blog.discord.com/how-discord-stores-billions-of-messages-7fa6ec7ee4c7
This seems like a promising route, but as I'm not experienced in the area, I would need more practical experience to make it a reality. Also, our scale will probably not be an issue.
An interesting notion would be to do everything over peer-to-peer communication channels, skipping the need for a central server holding shared state and instead having the players interconnect directly.
https://blog.logrocket.com/get-a-basic-chat-application-working-with-webrtc/
This seems interesting but presents a new set of challenges around failure modes and limits on the number of participants.
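The participant-limit concern is easy to quantify: a full mesh needs one connection per pair of players, which grows quadratically:

```typescript
// Connections needed for a full peer-to-peer mesh of n players:
// every pair of players needs its own connection, i.e. n * (n - 1) / 2.
function meshConnections(n: number): number {
  return (n * (n - 1)) / 2;
}
```

So 4 players need 6 connections, but 50 players already need 1,225, each one a separate WebRTC negotiation.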