
From 8 Seconds to 200ms API Responses: Rearchitecting Our .NET Caching Layer

Is It Vritra - SDE I
6 min read · 2 days ago

When our team noticed API response times climbing from milliseconds to seconds, we knew we had a problem. What we didn’t expect was that our caching strategy — meant to improve performance — was actually the real issue. Let’s walk through how we discovered, debugged, and ultimately solved a complex caching problem in our .NET microservices architecture.

🔹The Initial Architecture

Our system processed financial transactions for a global payment platform (as I mentioned in a previous article), handling roughly 50,000 requests per minute during peak hours.

The architecture consisted of:

▪️6 microservices handling different aspects of payment processing
▪️A mix of Redis and in-memory caching
▪️Postgres as the primary database
▪️Azure Service Bus for inter-service communication

The caching layer was originally designed to reduce database load and improve response times. Each service maintained its own cache, with a combination of:

// Initial caching implementation: each service combined a
// distributed (Redis) cache with a local in-memory cache.
public class CacheService
{
    private readonly IDistributedCache _distributedCache;
    private readonly IMemoryCache _memoryCache;

    public async…
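The original snippet is cut off at the paywall, so the method body above is not recoverable. Based on the two fields shown, a typical two-tier read-through lookup on such a class might look like the sketch below. The `GetOrSetAsync` name, the JSON serialization, and the TTL values are all my assumptions, not the author's code:

```csharp
using System;
using System.Text.Json;
using System.Threading.Tasks;
using Microsoft.Extensions.Caching.Distributed;
using Microsoft.Extensions.Caching.Memory;

// Hypothetical completion of the truncated CacheService:
// check the local in-memory cache first, then Redis, then
// fall back to the data source and back-fill both tiers.
public class CacheService
{
    private readonly IDistributedCache _distributedCache;
    private readonly IMemoryCache _memoryCache;

    public CacheService(IDistributedCache distributedCache, IMemoryCache memoryCache)
    {
        _distributedCache = distributedCache;
        _memoryCache = memoryCache;
    }

    public async Task<T?> GetOrSetAsync<T>(
        string key,
        Func<Task<T>> loadFromSource,   // hypothetical loader delegate
        TimeSpan? ttl = null)
    {
        // 1. Fast path: per-instance in-memory cache.
        if (_memoryCache.TryGetValue(key, out T? cached))
            return cached;

        // 2. Second tier: shared distributed cache (Redis).
        var bytes = await _distributedCache.GetAsync(key);
        if (bytes is not null)
        {
            var value = JsonSerializer.Deserialize<T>(bytes);
            // Short local TTL so instances don't drift for long (assumed value).
            _memoryCache.Set(key, value, TimeSpan.FromSeconds(30));
            return value;
        }

        // 3. Full miss: load from the source and back-fill both tiers.
        var fresh = await loadFromSource();
        await _distributedCache.SetAsync(
            key,
            JsonSerializer.SerializeToUtf8Bytes(fresh),
            new DistributedCacheEntryOptions
            {
                AbsoluteExpirationRelativeToNow = ttl ?? TimeSpan.FromMinutes(5)
            });
        _memoryCache.Set(key, fresh, TimeSpan.FromSeconds(30));
        return fresh;
    }
}
```

Note that this cache-aside pattern with two independently expiring tiers is exactly the kind of design that can serve stale data when the local and distributed TTLs disagree — which is consistent with the problem the article's title describes.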
