Issue with Livewire behind a load balancer

What seems to be the problem:

We’ve built a large-scale business management system for a medium-sized business and have deployed it behind a load balancer with two web servers. We’re finding that requests intermittently fail, and as a result data gets scrubbed from forms / components as the user bounces from one server to the next.

Steps to Reproduce:

Set up a load-balanced hosting environment with a minimum of two web servers. Interact with a page that contains multiple Livewire components and weep. :joy:

Are you using the latest version of Livewire:


Do you have any screenshots or code examples:

I can’t share this publicly due to sensitivities around the project, but I’m happy to share privately subject to an NDA (corporate world sucks).

Following up on this, the issue appears to be caused by the Forge load balancer operating in a round-robin fashion rather than persisting connections to a single server. In reality, session stickiness rather defeats the purpose of a load-balanced environment, so I’m keen to see if there’s a way to get around this without relying on sticky sessions.

I understand this is an old issue, but it might still be relevant for others (yes, you can find old stuff on Google :wink: ).

Laravel Forge can also use “IP Hash”, which routes requests from the same user to the same server.
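For illustration, IP-hash routing of the kind described above can be sketched as an nginx upstream block (the server hostnames here are hypothetical, not from the original setup):

```nginx
upstream app_servers {
    # ip_hash keys on the client's IP address, so each visitor
    # is consistently routed to the same backend server
    ip_hash;
    server web1.internal;
    server web2.internal;
}

server {
    listen 80;

    location / {
        proxy_pass http://app_servers;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Note that IP hash is still a form of stickiness: if a backend goes down, its users are rehashed to another server, and anyone behind a shared NAT all lands on one machine.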

But I am interested in knowing whether this is in fact an issue for Livewire. As long as the session is shared among the servers using one of the centralized drivers, I don’t think there should be any problem?

If your application will be load balanced across multiple web servers, you should choose a centralized store that all servers can access, such as Redis or a database.
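As a sketch of the centralized-store approach above, in Laravel this is a configuration change applied identically on every web server (the Redis hostname below is an assumption for illustration):

```ini
# .env on each web server — all servers read/write the same session store,
# so it no longer matters which server a Livewire request lands on
SESSION_DRIVER=redis
REDIS_HOST=redis.internal
REDIS_PORT=6379

# Cache should usually point at the shared store too
CACHE_DRIVER=redis
```

With `SESSION_DRIVER=database` instead, you would run `php artisan session:table` and migrate to create the shared sessions table.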

I’m a big believer that “sticky” sessions don’t really solve the problem here, Anders.

There’s an open PR over on GitHub that highlights the problem and fixes it:
