OK, that explains some of my performance woes - I’m in Japan, not the USA, so there are probably significant latency issues there.
But separately, there are other strategies you can use so that users don’t have to wait so long for things to happen - the most obvious to me is prefetching and batching data, then batching updates asynchronously. When I do a study session, it could load the entire session (items plus answers) into memory up front, and then as I respond it can confirm on screen but flush to the servers asynchronously, so the user doesn’t have to wait on an HTTP round trip every time they interact with the site.
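A minimal sketch of that record-locally-then-flush-in-batches idea - the class name, `submitBatch` callback, and flush threshold are all illustrative assumptions, not anything from a real API:

```typescript
// Sketch of "confirm on screen instantly, flush to the server in batches".
// `submitBatch` stands in for whatever HTTP endpoint actually accepts answers.

type Answer = { itemId: number; correct: boolean };

class ReviewSession {
  private pending: Answer[] = [];

  constructor(
    private submitBatch: (batch: Answer[]) => Promise<void>,
    private flushThreshold = 10,
  ) {}

  // Called on every user interaction: record locally, confirm instantly.
  record(answer: Answer): void {
    this.pending.push(answer);
    if (this.pending.length >= this.flushThreshold) {
      void this.flush(); // fire-and-forget; the UI never waits on this
    }
  }

  // One round trip for many answers instead of one per answer.
  async flush(): Promise<void> {
    if (this.pending.length === 0) return;
    const batch = this.pending;
    this.pending = [];
    try {
      await this.submitBatch(batch);
    } catch {
      // On failure, requeue so a later flush retries the whole batch.
      this.pending = batch.concat(this.pending);
    }
  }
}
```

You’d also call `flush()` on session end (or a timer) so the last partial batch isn’t stranded.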
That’s what the app I use for WaniKani on my phone (Tsurukame) does, and it makes it a joy to use. In fact, it keeps everything I’m currently studying in local storage so I can use the app fully offline, and just syncs up and fetches new content when I reconnect to the network. So there are no delays on anything, ever - I never have to wait on network calls in practice, just the occasional “new lessons don’t show up in my queue until I hit the network,” which is basically unnoticeable to me. That’s even better.
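The offline-first variant can be sketched like this - an in-memory `Map` stands in for real local storage, and the `sync` callback is a hypothetical single exchange that pushes queued reviews up and pulls new lessons down:

```typescript
// Sketch of offline-first study data: reads never touch the network,
// and everything that happened offline is reconciled in one sync call.

type Item = { id: number; prompt: string };

class OfflineStore {
  private items = new Map<number, Item>(); // stand-in for local storage
  private dirty: number[] = [];            // ids reviewed while offline

  load(items: Item[]): void {
    for (const it of items) this.items.set(it.id, it);
  }

  get(id: number): Item | undefined {
    return this.items.get(id); // always served locally, zero latency
  }

  markReviewed(id: number): void {
    this.dirty.push(id); // queued until we're back on the network
  }

  // On reconnect: push queued reviews up, pull new lessons down,
  // all in one exchange instead of many round trips.
  async sync(push: (ids: number[]) => Promise<Item[]>): Promise<void> {
    const fresh = await push(this.dirty);
    this.dirty = [];
    this.load(fresh);
  }
}
```

In a browser you’d back this with `localStorage` or IndexedDB so the queue survives a page reload.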
If opening HTTP connections is causing the slowness, you could also leave a WebSocket open to save on the HTTP/TCP handshake overhead, but that seems like overkill to me. I’d just go with the caching + batch-sync strategy above: avoiding the network altogether, and collapsing what would be a bunch of round trips into one, is much more likely to be performant. In my case, network delay is almost certainly a major issue, since I’m just geographically far away.
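Back-of-the-envelope numbers make the case for batching - the RTT and interaction count here are assumptions, not measurements:

```typescript
// Assumed: ~150 ms round-trip Japan <-> US, 50 interactions per session.
const rttMs = 150;
const interactions = 50;

const perRequestWait = rttMs * interactions; // user waits on every interaction
const batchedWait = rttMs;                   // one combined round trip

console.log(perRequestWait, batchedWait);    // 7500 150
```

Even at an optimistic RTT, that’s seconds of cumulative waiting per session that batching simply removes.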