We faced an interesting issue at Tyro this week.
There was a sudden spike in errors in our customer application.
The errors were related to the following exception:
JDBCConnectionException: Unable to acquire JDBC Connection
It would be easy to conclude that the database was down or that the driver had failed to connect.
The application had seen no code changes or database updates, and there were no network problems, yet inbound HTTP calls to the customer application were hanging for more than 20 seconds.
After digging deeper into the problem, we discovered there was no database issue at all.
Coincidentally, another, completely separate API server was having its own issues, with its CPU pinned at 100%.
We initially ignored this as we were having "database issues" rather than "HTTP endpoint issues".
After further investigation, we found that the customer application was indeed calling endpoints on that API server. Although we saw a couple of I/O errors when the API server was rebooted, it didn't occur to us that it could be the cause of the database connection issues we were encountering.
We had a look at the code and found the following:
@Transactional(readOnly = true)
public CustomerData getCustomerData(String id) {
    // Run a SELECT query against the DB
    Customer customer = customerRepository.findById(id);

    // Run a SELECT query against the DB
    CustomerDetails details = customerDetailsRepository.findById(id);

    // Call the API server to retrieve other information
    CustomerAddress address = apiClient.getAddressByCustomerId(id);

    // Run a SELECT query against the DB
    Location location = locationRepository.findByCustomerId(id);

    return new CustomerData(customer, details, address, location);
}
At first glance this seems valid: each repository call runs its query against the database, returns a result, and seemingly releases the database connection before moving on to the next line of code.
The key factor here, however, is the @Transactional annotation.
ChatGPT states:
If your method is inside an open Spring @Transactional and you've already touched the database, a hanging HTTP call can keep a JDBC connection checked out of the Hikari pool until the transaction ends. That can starve the pool and cause timeouts for other requests.
In a Spring @Transactional, the persistence context is created and a JDBC connection is obtained the first time you hit the DB.
Once obtained, that same connection is held for the entire transaction until commit or rollback (i.e. until the annotated method returns).
If your code then makes an external HTTP call that hangs before the transaction completes, the connection remains checked out, which puts pressure on the Hikari pool and can cause a SQLTransientConnectionException (pool exhausted) for other threads.
If the HTTP call happens before any DB interaction and nothing triggers a flush, the connection might not yet be acquired so although the code may hang, there may not be any Hikari pool impact.
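To make the failure mode concrete, here is a minimal sketch of the Hikari settings involved. The pool is bounded, so every connection parked inside a hanging transaction is one fewer connection for everyone else; the URL and numbers below are illustrative, not our production configuration.

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class PoolExample {

    public static HikariDataSource dataSource() {
        // Illustrative settings, not our production values.
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:postgresql://db-host:5432/customers"); // hypothetical URL
        config.setMaximumPoolSize(10);        // bounded pool: 10 connections total
        config.setConnectionTimeout(20_000);  // wait at most 20s for a free connection

        // If 10 transactions are each blocked on a hanging HTTP call, all 10
        // connections stay checked out. The next request waits up to 20s for a
        // free connection and then fails, surfacing as
        // "Unable to acquire JDBC Connection".
        return new HikariDataSource(config);
    }
}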
If the remote server spikes to 100% CPU and stops responding, no error response is returned to the calling HttpClient.
The client opens a TCP connection to the target server and sends the request bytes, then waits for the server’s response (status line, headers, then body).
If the server’s CPU is fully utilized, it may accept the TCP connection but never process the request, so no response bytes ever come back and the socket remains open but idle.
The client is simply waiting for data and has not technically failed, so the client thread can sit blocked indefinitely unless you define limits.
By default, most HTTP libraries do not time out idle sockets unless you configure explicit timeouts.
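As one concrete example, the JDK's built-in java.net.http.HttpClient applies no connect timeout and no request timeout unless you set them explicitly. A minimal sketch of a client that bounds both might look like this; the timeout values and endpoint URL are invented for the example.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class ApiClientExample {

    // Illustrative timeouts; tune them to the API's real latency profile.
    private static final HttpClient CLIENT = HttpClient.newBuilder()
            .connectTimeout(Duration.ofSeconds(2))   // fail fast if the TCP connection can't be established
            .build();

    public static String getAddressByCustomerId(String id) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(
                        URI.create("https://api.example.internal/customers/" + id + "/address")) // hypothetical endpoint
                .timeout(Duration.ofSeconds(5))      // bound the wait for the response, not just the connect
                .GET()
                .build();

        // Without the timeouts above, a server that accepts the connection but
        // never responds would leave this call (and its thread) blocked indefinitely.
        return CLIENT.send(request, HttpResponse.BodyHandlers.ofString()).body();
    }
}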
This fact helped us understand one simple rule of thumb:
Never make HTTP calls inside a @Transactional context; otherwise an unrelated HTTP issue can keep a database connection checked out unnecessarily and starve your connection pool.
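One way to apply that rule to the method above (a sketch, not the exact fix we shipped) is to keep the repository reads in a small @Transactional method on a separate bean and make the API call from a non-transactional caller, so the JDBC connection is back in the pool before the HTTP request goes out. CustomerDbReads, CustomerDataService and the CustomerDbData holder are hypothetical names introduced for the example; the repositories and apiClient are the same collaborators as in the original snippet.

@Service
public class CustomerDbReads {

    private final CustomerRepository customerRepository;
    private final CustomerDetailsRepository customerDetailsRepository;
    private final LocationRepository locationRepository;

    public CustomerDbReads(CustomerRepository customers,
                           CustomerDetailsRepository detailsRepo,
                           LocationRepository locations) {
        this.customerRepository = customers;
        this.customerDetailsRepository = detailsRepo;
        this.locationRepository = locations;
    }

    // Only the repository reads run inside the transaction, so the JDBC
    // connection goes back to the pool as soon as this method returns.
    @Transactional(readOnly = true)
    public CustomerDbData load(String id) {
        Customer customer = customerRepository.findById(id);
        CustomerDetails details = customerDetailsRepository.findById(id);
        Location location = locationRepository.findByCustomerId(id);
        return new CustomerDbData(customer, details, location);
    }
}

@Service
public class CustomerDataService {

    private final CustomerDbReads dbReads;
    private final ApiClient apiClient;

    public CustomerDataService(CustomerDbReads dbReads, ApiClient apiClient) {
        this.dbReads = dbReads;
        this.apiClient = apiClient;
    }

    // No @Transactional here: by the time the HTTP call is made, no JDBC
    // connection is checked out, so a hanging API server can no longer
    // starve the pool.
    public CustomerData getCustomerData(String id) {
        CustomerDbData db = dbReads.load(id);
        CustomerAddress address = apiClient.getAddressByCustomerId(id);
        return new CustomerData(db.customer(), db.details(), address, db.location());
    }
}

Putting the transactional reads on a separate bean isn't just cosmetic: Spring's proxy-based @Transactional is not applied on self-invocation, so a @Transactional method called from another method of the same class would silently lose the annotation's effect.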