Multiple concurrent daemons for HA/scalability
I am currently running two Mattermost daemons on separate VMs doing exactly the same thing (the same NFS share for files, the same MySQL cluster backend for the database) to get HA.
In my original setup I had them load-balanced too (meaning individual user sessions were spread across both backends), but I noticed notifications were not always delivered.
I got around this issue by reconfiguring the load balancer and putting the Mattermost backends in active/standby: all sessions are now assigned to one backend (the 'active' one) until it fails (reboot, upgrade, etc.); the standby then becomes active and all sessions move to the other backend. This works perfectly.
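For illustration, an active/standby setup like this can be sketched in HAProxy; the addresses, ports, and health-check path below are placeholders, not my actual configuration:

```
frontend mattermost_in
    bind *:80
    default_backend mattermost_backends

backend mattermost_backends
    # Health-check path is an assumption; substitute whatever your version exposes
    option httpchk GET /api/v4/system/ping
    # Only mm1 receives traffic; mm2 takes over when mm1 fails its health check
    server mm1 10.0.0.11:8065 check
    server mm2 10.0.0.12:8065 check backup
```

The `backup` keyword is what makes this active/standby rather than round-robin: mm2 gets no sessions as long as mm1 passes its checks.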
I did not investigate the reason in detail, but I suspect notifications are sent within a single daemon only.
E.g. user X sends a message to user Y. The Mattermost daemon receiving the message from user X will try to notify user Y about it. If user Y happens to be connected to the same daemon, all is well; if user Y is on the other daemon, they never get the notification. They will, however, see the message after refreshing, because the data is then pulled from the database.
An obvious solution would be to make every daemon poll the database periodically (at least once a second?) for new messages.
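The polling idea above can be sketched as follows. This is a minimal illustration, not Mattermost code: `MessageStore` stands in for the shared MySQL database, and `poll_once` is what each daemon would run on its timer, remembering the newest timestamp it has seen so far:

```python
import threading


class MessageStore:
    """In-memory stand-in for the shared database (illustration only)."""

    def __init__(self):
        self._lock = threading.Lock()
        self._messages = []  # list of (create_at, channel_id, text)

    def add(self, create_at, channel_id, text):
        with self._lock:
            self._messages.append((create_at, channel_id, text))

    def newer_than(self, ts):
        # Roughly: SELECT ... FROM Posts WHERE CreateAt > ? ORDER BY CreateAt
        with self._lock:
            return sorted(m for m in self._messages if m[0] > ts)


def poll_once(store, last_seen, notify):
    """One polling pass: fetch messages newer than last_seen, push them to
    locally connected clients via notify(), and return the new watermark."""
    for create_at, channel_id, text in store.newer_than(last_seen):
        notify(channel_id, text)
        last_seen = max(last_seen, create_at)
    return last_seen
```

Each daemon would call `poll_once` once a second, so a message written by the other daemon is picked up on the next pass and pushed to this daemon's own clients.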
Brent Gardner commented
I would like support for multiple concurrent daemons for another reason: the ability to run Mattermost in a shared hosting environment without needing VMs. Each instance would have its own config.json but otherwise share common binaries. This may be impractical due to performance ramifications, but it is desirable nonetheless.
Richard Hordern (Monarobase) commented
Instead of polling the database, would it be possible for daemons to know about the other daemons and notify each other (maybe with an API call)? Polling the database every x seconds would use a lot of unnecessary resources, and unless it ran every second it wouldn't feel very fluid.
I think this feature makes a lot of sense and would make Mattermost much more scalable.
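The notify-each-other idea could look something like this. Everything here is hypothetical: the `/api/cluster/notify` endpoint is not a real Mattermost API, and `send` is injectable only so the fan-out logic can be exercised without a network:

```python
import json
import urllib.request


def broadcast_to_peers(peers, event, send=None):
    """Fan out a cluster event (e.g. "new post in channel X") to every
    peer daemon, returning the list of peers that accepted it.

    peers: base URLs of the other daemons (e.g. "http://mm2:8065").
    event: JSON-serializable payload describing what happened.
    send:  optional override for the actual delivery; the default does
           an HTTP POST to a hypothetical /api/cluster/notify endpoint.
    """
    if send is None:
        def send(url, payload):
            req = urllib.request.Request(
                url, data=payload, headers={"Content-Type": "application/json"}
            )
            urllib.request.urlopen(req, timeout=2)

    payload = json.dumps(event).encode("utf-8")
    delivered = []
    for peer in peers:
        try:
            send(peer + "/api/cluster/notify", payload)
            delivered.append(peer)
        except OSError:
            # A dead peer must not block delivery to the others.
            pass
    return delivered
```

Each daemon receiving such an event would then push the notification to its own connected clients, so nothing depends on which backend originally accepted the post.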