Posts

Avoid Multiple Cache Refreshes: The Double Check Approach

In previous articles, we've stressed the importance of caching to enhance the performance of our applications. This time, we're discussing a small yet potent tip to further amplify the benefits of caching. A standard caching routine often looks like this: This code is 'functional' and can be regarded as the 'default' approach to caching. Here, we're fetching a value from the cache, and if it's missing, we generate it and store it for future requests. However, a problem arises in a high-traffic application, such as a .NET Core web application or API, that must handle many concurrent requests. Suppose multiple requests reach this code simultaneously and each finds that it needs to generate the value. In that case, you'll get multiple refreshes of the same value and several redundant calls to SetValue. To prevent this, we can employ a mutual-exclusion (mutex) lock to restrict multiple threads from accessing the sam...
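To make the idea concrete, here is a minimal sketch of the double-check approach. The post targets .NET, so take this purely as an illustration: it is written in TypeScript, an in-process Map stands in for the real cache, the names (DoubleCheckCache, getOrCreate) are invented for the example, and a per-key in-flight promise plays the role of the mutex.

```typescript
// Minimal sketch of the double-check pattern (illustrative names, in-process Map as the cache).
type Factory<T> = () => Promise<T>;

class DoubleCheckCache {
  private store = new Map<string, unknown>();
  private locks = new Map<string, Promise<unknown>>();

  async getOrCreate<T>(key: string, ttlMs: number, factory: Factory<T>): Promise<T> {
    // First check: serve straight from the cache when the value is already there.
    const cached = this.store.get(key);
    if (cached !== undefined) {
      return cached as T;
    }

    // Only one caller per key runs the factory; every other caller awaits the same promise.
    let pending = this.locks.get(key);
    if (!pending) {
      pending = (async () => {
        // Second check, the heart of the pattern: re-read the cache after "acquiring the lock",
        // in case another caller filled it between the first check and this point.
        const again = this.store.get(key);
        if (again !== undefined) {
          return again;
        }
        const value = await factory();                    // the expensive call (DB, API, ...)
        this.store.set(key, value);                       // the single SetValue-style write
        setTimeout(() => this.store.delete(key), ttlMs);  // naive expiration for the sketch
        return value;
      })().finally(() => this.locks.delete(key));
      this.locks.set(key, pending);
    }
    return (await pending) as T;
  }
}
```

With this shape, concurrent requests for the same key share one generation: only the first caller pays for the expensive work, and the cache is written exactly once.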

The Power of Simplicity: How a random iterator saved the day

…and my database. How many times has something so simple saved the day? Sometimes, the simplest solutions can have a significant impact on a problematic situation. In this blog post, we'll explore one such scenario where a small algorithm change led to substantial performance improvements in an existing system. The problem: the issue at hand involved a component that used a database table as a makeshift queue for processing updated rows. Multiple processor instances read from the same table, with Redis locking in place to prevent concurrent processing of the same row. However, because the processors worked through the rows in the same order, there were numerous collisions and timeouts, slowing the system down. As illustrated in the image, the processors were interfering with one another; while they ultimately completed the work, excessive time was wasted on fruitless attempts to lock records, which in turn increased the strain on the database. The solution: a random iterator ...
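A minimal sketch of that idea, assuming hypothetical fetchBatch, tryLock, process, and unlock helpers in place of the real data access and Redis locking code: each processor shuffles its batch before walking it, so concurrent instances rarely queue up behind the same row.

```typescript
// Sketch of the "random iterator": visit candidate rows in a random order.
// fetchBatch, tryLock, process and unlock are hypothetical stand-ins.

interface Row { id: number; }

function shuffle<T>(items: T[]): T[] {
  // Fisher-Yates shuffle; gives every processor instance its own visit order.
  const copy = [...items];
  for (let i = copy.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [copy[i], copy[j]] = [copy[j], copy[i]];
  }
  return copy;
}

async function processPendingRows(
  fetchBatch: () => Promise<Row[]>,
  tryLock: (id: number) => Promise<boolean>,
  process: (row: Row) => Promise<void>,
  unlock: (id: number) => Promise<void>,
): Promise<void> {
  const batch = await fetchBatch();
  for (const row of shuffle(batch)) {        // the random iterator
    if (!(await tryLock(row.id))) continue;  // someone else holds it; skip instead of waiting
    try {
      await process(row);
    } finally {
      await unlock(row.id);
    }
  }
}
```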

Building an Active/Standby HA Architecture with Queue-Based Microservices using Azure Functions

In recent years, microservices architecture has gained traction owing to its remarkable scalability and adaptability. Azure Functions, a serverless compute service, facilitates the development of dynamic, scalable microservices. When designing microservices for enterprise systems, achieving high availability (HA) is crucial. This blog post delves into a strategy for constructing an HA architecture that incorporates queue-based microservices through the use of Azure Functions. Challenges with queue-based microservices: creating HA queue-based microservices with Azure Functions comes with some design challenges. One of them is deciding between an Active/Active or Active/Standby architecture. While an Active/Active architecture is trivial to set up, it carries a cost in inter-region traffic and compute resources. Additionally, there may be scenarios where multiple consumers for the same queue are not necessary or desired, making Active/Standby a better option. But there is a proble...
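The excerpt stops before the solution, but for context, this is roughly the building block the architecture is made of: a queue-triggered Azure Function. The sketch below uses the Node.js v4 programming model purely for illustration (the post itself may well use C#), and the queue name and connection setting are placeholders.

```typescript
// Minimal queue-triggered Azure Function (Node.js v4 programming model).
// "orders" and "QueueConnection" are placeholder names, not values from the post.
import { app, InvocationContext } from "@azure/functions";

export async function processOrder(queueItem: unknown, context: InvocationContext): Promise<void> {
  // Each message is handled independently; scaling and retries come from the platform.
  context.log("Processing queue item:", queueItem);
}

app.storageQueue("processOrder", {
  queueName: "orders",
  connection: "QueueConnection",
  handler: processOrder,
});
```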

Saving costs in the cloud with smarter caching - Part 1

A cache is a component where data is stored so that future requests can be served faster. For example, in the context of a web application responding to many requests, instead of hitting your backend database or microservice every time, the application can remember the last value for a given computation or call. One particular flavor is a centralized cache system, provided by tools like Redis, Memcached, etc.: a central place where we store data temporarily and that all instances of our app (and even other apps) can use. There are many uses, from storing session data in a multi-server web application to providing a performance advantage by keeping a value that is costly to calculate or obtain. A centralized cache is a lifesaver, but it tends to be overused, and there are some scenarios where we could reduce its usage or skip it entirely, and in doing so save costs and improve performance. Reducing the usage of the centralized cache: let's take the example of a web...
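One common way to reduce that usage, offered here only as an illustration and not necessarily the approach the full post takes, is to put a small in-memory layer with a short TTL in front of the centralized cache, so repeated reads within that window never leave the process. fetchFromCentralCache below is a hypothetical stand-in for the Redis or Memcached call.

```typescript
// Sketch of a small in-memory layer in front of a centralized cache.
// fetchFromCentralCache is a hypothetical call to Redis/Memcached/etc.

interface Entry { value: unknown; expiresAt: number; }

const localCache = new Map<string, Entry>();

async function getCached(
  key: string,
  localTtlMs: number,
  fetchFromCentralCache: (key: string) => Promise<unknown>,
): Promise<unknown> {
  const hit = localCache.get(key);
  if (hit && hit.expiresAt > Date.now()) {
    return hit.value;                              // served in-process, no network call
  }
  const value = await fetchFromCentralCache(key);  // only now do we pay the round trip
  localCache.set(key, { value, expiresAt: Date.now() + localTtlMs });
  return value;
}
```

Even a local TTL of a few seconds keeps staleness bounded while removing most of the round trips (and their cost) to the centralized cache.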

Hello Thanos: static web app with Azure Functions

We are going to create a plain HTML+JS static web app with an Azure Function as the backend API. By "plain" I mean no framework like React, Angular, or Vue, but we will use Bootstrap and jQuery. It will be a simple app with 2 input fields and 1 button. You will enter your name and last name, and the app will say "hello" to Thanos and let you know whether Thanos snapped you or not (because you should not talk to the mad titan without permission). We are going to use ReCaptcha V3 to prevent abuse, and a Bootstrap form with client-side validation before sending the data to the Azure Function. #1 Create the app from Visual Studio Code. Let's create a "custom" framework static web app following these instructions, then add a /src folder and place a basic index.html file there with the text "hello world". Just make sure you use /src as the "source path" and leave the "build path" empty. You can confirm the paths are correct by opening the GitHu...
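For a rough picture of the client-side flow, the sketch below (TypeScript with fetch rather than the jQuery the post uses) validates the two fields, obtains a ReCaptcha V3 token, and posts everything to the function. The /api/hello route and the site key are placeholders, not values from the post.

```typescript
// Client-side sketch: validate, get a ReCaptcha V3 token, then call the function API.
// Assumes the ReCaptcha script has already loaded (normally wrapped in grecaptcha.ready).
declare const grecaptcha: {
  ready(cb: () => void): void;
  execute(siteKey: string, opts: { action: string }): Promise<string>;
};

const SITE_KEY = "<your-recaptcha-site-key>"; // placeholder

async function sayHello(name: string, lastName: string): Promise<string> {
  if (!name || !lastName) {
    throw new Error("Both fields are required");   // mirrors the client-side validation
  }
  const token = await grecaptcha.execute(SITE_KEY, { action: "submit" });
  const response = await fetch("/api/hello", {     // placeholder route for the Azure Function
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ name, lastName, token }),
  });
  return response.text();                           // e.g. whether Thanos snapped you or not
}
```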

The case for a 4-week sprint

Every time we talk about Scrum, the length of the sprint is stated as a period of "two to four weeks"; rarely, however, have I seen people go beyond 2 weeks. This is not written in stone, of course, and while 3 weeks might seem odd, there is a benefit to using a longer period for your sprints. Image taken from scrum.org. Without any hard data to back that decision, the answer I get when asking about it seems to be related to "delivery speed": we assume that because we deliver value to the business every 2 weeks, we are in a more "agile" environment. Well, that might not be so true in every sense or for every project. Let's break down some arguments: 1. Sprint length is not equal to the speed of delivery. There is more to the speed of development and delivery than the timeframe of the delivery cycle; in a world with DevOps and full CI/CD implemented for your project, you could (should?) actually deliver features every single day all t...

Emergent Architecture is not a thing

Emergent architecture is a term I heard for the first time from a coworker a while ago. It was touted as a combination of agile and architecture, or perhaps as the way the shiny term "architecture" was inserted into something as fast-paced as Scrum, or as a way for the company to justify not having architects, but to put it plainly, it's not a thing. There are a few things wrong with it... conceptually. I have read a lot about it, looking for any source of information I could find: presentations, blogs, some links here and there, and I could not find a solid basis for what they claimed it was, or any reason for it to be considered a thing, except for the need to name something with "architecture" in it. Agile is a methodology and Scrum is a method; if you want to know a bit more about the difference, just hit Wikipedia. They are related to how to build software. Now, architecture is about structure; it is about what you build. You can reach that structure with any me...