Press "Enter" to skip to content

159 search results for "arun sirpal"

Azure Redis Cache Geo-Replication

Arun Sirpal shows how to set up geo-replication in Azure Redis Cache:

The concept of a geo-replicated partnership between a primary and secondary node is very similar to something you may have seen with Azure SQL DB, where the primary handles all R/W and the changes are then pushed to the secondary (async). This is no different with Redis.

Read on to see what limitations exist and how you can set up geo-replication.
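As a rough illustration of that read/write split (my own sketch, not from Arun's post), here is what it can look like from the client side with redis-py. The host names, access keys, and the key being cached are placeholders, and it assumes you read directly from the linked (read-only) secondary cache while sending all writes to the primary:

import redis

# Hypothetical host names for a geo-replicated pair; the linked secondary
# in an Azure Redis geo-replication partnership is read-only.
primary = redis.Redis(host="mycache-primary.redis.cache.windows.net",
                      port=6380, password="<primary-access-key>", ssl=True)
secondary = redis.Redis(host="mycache-secondary.redis.cache.windows.net",
                        port=6380, password="<secondary-access-key>", ssl=True)

# All writes go to the primary; the change is replicated asynchronously.
primary.set("session:42", "some-serialized-state", ex=3600)

# Reads can be served from the geo-secondary, accepting a small replication lag.
print(secondary.get("session:42"))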

Persisting Data in Azure Redis Cache

Arun Sirpal feeds the mogwai after midnight:

I mentioned before that you could use the idea of data persistence to rebuild your data after a total failure. There are two types: RDB and AOF.

RDB – persists a snapshot of your cache in a binary format. The snapshot is saved in an Azure Storage account.

AOF – saves every write operation to a log. The log is saved at least once per second into an Azure Storage account.

I’m a big proponent of using Redis as a caching service. I’m not a big proponent of using Redis as a persisted database, mostly because I’ve had a lot of bad experiences with persistent Redis…

Azure Redis Tips

Arun Sirpal enumerates some advice:

My learnings on Redis thus far which you may find useful:

1. Location of Redis should be close to your app.

2. Data structures within Redis: larger key-value sizes lead to fragmentation of memory space, and those larger memory requirements mean more network data transfer. Redis states to use 100KB as a maximum, since this affects the transfer time allotted to the app – a request could time out if the data is big.

Click through for the rest of Arun’s advice. My advice on the 100KB maximum is that it really should be closer to 100 bytes or 1KB max in practice, especially for storing data which differs by entity (user, customer, organization, whatever your domain uses).
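To put that size advice into practice, here is a minimal sketch (my own illustration, not from Arun's post) that measures the serialized payload before caching it and skips the cache if it exceeds a threshold. The connection details, key name, and threshold are placeholders:

import json
import redis

MAX_VALUE_BYTES = 1024  # closer to 1KB than the 100KB ceiling Redis cites

r = redis.Redis(host="mycache.redis.cache.windows.net", port=6380,
                password="<access-key>", ssl=True)

def cache_if_small(key, obj, ttl_seconds=300):
    payload = json.dumps(obj).encode("utf-8")
    if len(payload) > MAX_VALUE_BYTES:
        # Large values fragment memory and slow transfers, so skip caching them.
        return False
    r.setex(key, ttl_seconds, payload)
    return True

cache_if_small("user:1001:profile", {"id": 1001, "name": "A", "plan": "basic"})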

Making Redis Do Your Bidding

Arun Sirpal looks at some of the command language for Azure Redis:

Now that we have created our Redis Cache, let's connect to it. You can use the most common tool, redis-cli.exe (https://redis.io/download), or, as I am going to do, use the console directly from the Azure Portal. This probably isn't the best way, but it's the easiest for this blog. 2 key points here:

Read on for those points, as well as examples of commands you can run.
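If you would rather drive the same commands from code than from redis-cli or the portal console, here is a small sketch using the redis-py client; the host name, access key, and key are placeholders, and the calls mirror common redis-cli usage (PING, SET with an expiry, GET, TTL, DEL):

import redis

# Placeholder connection details; Azure Cache for Redis listens for TLS on 6380.
r = redis.Redis(host="mycache.redis.cache.windows.net", port=6380,
                password="<access-key>", ssl=True)

print(r.ping())                       # PING -> True if the cache is reachable
r.set("greeting", "hello", ex=120)    # SET greeting "hello" EX 120
print(r.get("greeting"))              # GET greeting -> b'hello'
print(r.ttl("greeting"))              # TTL greeting -> seconds remaining
r.delete("greeting")                  # DEL greeting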

Creating an Azure Redis Cache

Arun Sirpal continues a series on Azure Redis:

Remember – basic should never be used for production. Also, if you need a dedicated service then you will not want C0, because it is based on shared infrastructure. Redis can get expensive, but it could be cost-effective, especially if you design around a multi-app approach per cache.

I select P1 – Premium with a 6GB cache just to talk a couple of things through.

As a note, 6GB of cache is a lot in most environments. That’s because your average cached element size in Redis should be measured in single-digit or double-digit bytes, not kilobytes. You’re typically caching individual values, not entire documents, so if you average 64 bytes per cached key-value combo, you can get somewhere around 90 million values in cache at a time. The database call savings add up quickly, considering a really simplistic estimation: if the average number of queries before expiration for a cached item is 3, a single “cycle” of caching saves you about 270 million database calls. That can allow you to downscale your relational databases considerably, saving a lot of money in the process. There’s a lot of hand-waving I’m doing in the math and a lot of complexity I’m wiping away, but both of those tend on average to make the cache more effective, not less.
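For anyone who wants to check that back-of-the-envelope math, here is the same estimate as a few lines of Python. The 64-byte average entry size and the 3-queries-per-item figure are the assumptions from the paragraph above, and the per-key overhead allowance is my own rough fudge factor:

cache_bytes = 6 * 1024**3          # a 6GB Premium P1 cache
avg_entry_bytes = 64               # assumed average key + value size
overhead_factor = 0.9              # rough allowance for per-key Redis overhead

entries = int(cache_bytes / avg_entry_bytes * overhead_factor)
queries_saved_per_entry = 3        # assumed hits before a cached item expires
calls_saved_per_cycle = entries * queries_saved_per_entry

print(f"~{entries:,} cached values")              # roughly 90 million values
print(f"~{calls_saved_per_cycle:,} calls saved")  # roughly 270 million calls per cycle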

An Overview of Azure Redis Cache

Arun Sirpal lays out the use case of Azure Redis Cache:

Redis Cache is a well-known caching technology, and you can run it in Azure as a fully managed service. A common requirement (the most basic one) is a workflow like:

1. When an application needs to retrieve data, it will first search to see if it exists in Azure Cache for Redis.

2. If the data is found in Azure Cache for Redis (cache hit), use it.

3. If the data is not found in Azure Cache for Redis (cache miss), then the application will need to retrieve the data from Azure SQL (or whatever cloud database back end you use).

4. For cache miss scenarios, the requesting application should add the data retrieved from the Azure Database to Azure Cache for Redis.

This is also known as the cache-aside pattern. If you’re feeling really cheeky, you can combine cache-aside with the decorator pattern to “hide” the cache in your code.
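Here is a minimal cache-aside sketch along those lines, using redis-py and a placeholder database lookup. The connection details, key format, and get_customer_from_sql function are all hypothetical stand-ins for whatever back end you use:

import json
import redis

r = redis.Redis(host="mycache.redis.cache.windows.net", port=6380,
                password="<access-key>", ssl=True)

def get_customer_from_sql(customer_id):
    # Placeholder for the real Azure SQL (or other back-end) query.
    return {"id": customer_id, "name": "Example Customer"}

def get_customer(customer_id, ttl_seconds=300):
    key = f"customer:{customer_id}"
    cached = r.get(key)                  # steps 1-2: check the cache first
    if cached is not None:               # cache hit: use it
        return json.loads(cached)
    customer = get_customer_from_sql(customer_id)    # step 3: cache miss, hit the database
    r.setex(key, ttl_seconds, json.dumps(customer))  # step 4: add it to the cache for next time
    return customer

print(get_customer(1001))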

Replication in Azure DB for MySQL

Arun Sirpal explains how you can set up replication with Azure DB for MySQL:

No doubt there will be a need for you to split off your analytical queries from the main database for performance reasons.

If you have been following me, you will know that with Azure SQL DB you would use failover group read endpoints for this. With MySQL, we need to build a read-only replica on another server. This uses MySQL's native binlog replication feature, which is great to hear. This form is asynchronous.

Read on to see how.
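As a sketch of what that split looks like from the application side (my own illustration, with hypothetical server names, credentials, and schema), writes go to the source server while analytical reads go to the replica created from it:

import pymysql

# Hypothetical Azure Database for MySQL endpoints: the source and its read replica.
source = pymysql.connect(host="myapp-mysql.mysql.database.azure.com",
                         user="admin@myapp-mysql", password="<password>",
                         database="sales", ssl={"ca": "/path/to/ca.pem"})
replica = pymysql.connect(host="myapp-mysql-replica.mysql.database.azure.com",
                          user="admin@myapp-mysql-replica", password="<password>",
                          database="sales", ssl={"ca": "/path/to/ca.pem"})

# OLTP-style write against the source; binlog replication ships it asynchronously.
with source.cursor() as cur:
    cur.execute("INSERT INTO orders (customer_id, amount) VALUES (%s, %s)", (42, 19.99))
source.commit()

# Analytical query routed to the read-only replica.
with replica.cursor() as cur:
    cur.execute("SELECT customer_id, SUM(amount) FROM orders GROUP BY customer_id")
    print(cur.fetchall())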

Server Variables with Azure DB for MySQL

Arun Sirpal shows how to see and set server variables in Azure’s MySQL offering:

If you have used MySQL before, you will know about the system server variables and commands such as SHOW VARIABLES. You can access most of them via the Azure portal, or connect to MySQL and issue the commands you have come to know.

Let’s see a quick example.

Click through for that quick example, including one minor difference from standard MySQL implementations.
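Here is a quick sketch of issuing those commands from Python with PyMySQL against a placeholder Azure Database for MySQL server; the server name, credentials, and the variables queried are just common examples, not anything specific from Arun's post:

import pymysql

conn = pymysql.connect(host="myapp-mysql.mysql.database.azure.com",
                       user="admin@myapp-mysql", password="<password>",
                       ssl={"ca": "/path/to/ca.pem"})

with conn.cursor() as cur:
    # List a few server variables, the same output you'd see from SHOW VARIABLES;
    cur.execute("SHOW VARIABLES LIKE 'slow_query_log%'")
    for name, value in cur.fetchall():
        print(name, "=", value)

    # Session-level change; server-level parameters are set through the Azure portal/CLI.
    cur.execute("SET SESSION sql_mode = 'STRICT_TRANS_TABLES'")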

Managed Instance Failover Groups

Arun Sirpal takes us through Azure SQL Managed Instance failover groups:

If you have been following me for a while, you will know that I really like failover groups within Azure SQL DB, and it is no different when applying them to Managed Instances. If you want a rock-solid DR plan, this is the way forward.

Remember, it's an abstraction layer on top of the active geo-replication feature. Before this, we had to do a lot of manual one-to-one database setups, but this feature simplifies deployment and management of geo-replicated databases at scale. You can initiate failover manually, or it can happen automatically if there is a massive failure (researching this topic, that could mean anything from memory leaks to the wrong network cables being cut during routine hardware decommissioning – you never know, it could happen, so plan for it).

Click through to see how to set this up and what failover looks like.
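One practical payoff is that the application connects through the failover group's listener endpoint rather than to a specific instance, so a failover doesn't require a connection-string change. Here's a minimal sketch with pyodbc, where the failover group name, DNS zone, database, and credentials are all made up for the example:

import pyodbc

# Hypothetical read-write listener for a Managed Instance failover group;
# after a failover, the same name resolves to whichever instance is now primary.
conn_str = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:myfog.abcd1234.database.windows.net,1433;"
    "Database=AppDb;UID=appuser;PWD=<password>;Encrypt=yes;"
)

with pyodbc.connect(conn_str) as conn:
    row = conn.cursor().execute("SELECT @@SERVERNAME").fetchone()
    print(row[0])   # shows the instance currently acting as primary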
