I have an Express Node API on Heroku. It has a few open routes that make upstream requests to another web server, which then executes some queries and returns data.
So client->Express->Apache->DB2.
This has been running just fine for several months on a Heroku Hobby dyno. Now, however, we've exposed another route and more client requests are coming in, and Heroku is throwing H12 errors (because the Express app isn't returning a response within 30 seconds).
I'm using axios in the Express app to make the requests to Apache and get the data. However, I'm not seeing anything fail in the logs: no errors are being caught that would give me more detail about why things are timing out. I've investigated the Apache->DB2 side of things, and the bottleneck doesn't seem to be there; it's almost certainly on the Express side.
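For reference, the pattern I assume I need is a client-side deadline shorter than Heroku's 30-second limit, so the upstream call fails in my own logs instead of only surfacing as an H12. With axios that would be its `timeout` option on `axios.create`; the sketch below shows the same idea with Node's built-in `fetch` (Node 18+) so it's self-contained, and the URL is hypothetical:

```javascript
// Sketch: give each upstream call an explicit deadline below Heroku's 30 s
// router limit (25 s here), so a slow Apache/DB2 call throws a catchable
// error in the Express app rather than silently exceeding the H12 cutoff.
// axios equivalent: axios.create({ timeout: 25000 }).
async function fetchWithDeadline(url, ms = 25000) {
  try {
    // AbortSignal.timeout aborts the request after `ms` milliseconds
    const res = await fetch(url, { signal: AbortSignal.timeout(ms) });
    return await res.json();
  } catch (err) {
    // A TimeoutError here means we gave up before Heroku's router did,
    // so the failure now shows up in the app's own logs with details.
    console.error('upstream call failed:', err.name, err.message);
    throw err;
  }
}
```

This at least turns an invisible H12 into a logged error with a name and a stack, which narrows down whether the time is spent waiting on Apache or inside Express itself.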
Per Heroku's advice, I connected the app to New Relic but haven't gained any insights yet. Could this be a scalability issue with Express when a burst of new requests comes in over a short period? The volume isn't particularly high (roughly 50 requests/min at peak). Would beefing up the Heroku dyno do anything? Are there other ways to actually see what's going on with Express?
It seems like 10-15% of client requests are hitting the timeout, and it tends to happen at times when there are lots of incoming requests.
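One way I imagine getting visibility is per-request timing in the app itself. A minimal Express-style middleware sketch (the function shape `(req, res, next)` is standard; hooking it up via `app.use(timing)` is the assumed usage) that logs each response's duration, so slow routes and bursts stand out in the Heroku logs:

```javascript
// Minimal request-timing middleware sketch: logs method, URL, status, and
// elapsed milliseconds for every response. Durations approaching 30000 ms
// in the logs would line up with Heroku's H12s.
function timing(req, res, next) {
  const start = process.hrtime.bigint();
  // 'finish' fires when the response has been handed to the OS
  res.on('finish', () => {
    const ms = Number(process.hrtime.bigint() - start) / 1e6;
    console.log(`${req.method} ${req.url} ${res.statusCode} ${ms.toFixed(1)}ms`);
  });
  next();
}
```

Registered before the routes (e.g. `app.use(timing)`), this would show whether the slow 10-15% are concentrated on one route or spread across all of them during bursts.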
Thanks!
question from:
https://stackoverflow.com/questions/65886462/expressjs-random-timeouts-on-heroku