Press "Enter" to skip to content

Category: Hardware

Wiring A Raspberry Pi 3

Drew Furgiuele begins his project to build an easy button for backups:

I should also pause for a second and talk about wiring hobby boards like this. Good news first: you won’t electrocute yourself on it. I mean, if you do something really dumb like try to wire it underwater or eat it or something, then maybe you could, but you shouldn’t ever receive a shock while working with a board like this, even plugged in. The bad news is that even though you won’t damage yourself, you could very well damage the board if you just randomly plug things in. Here’s a hard and fast rule: if you’re not an electronics expert or an electrical engineer, leave it to experts to tell you where and how to wire. I’m not calling myself an expert here, but I have sort of a basic understanding of how to wire these things up. The point I’m attempting to make is: if you want to really learn and understand circuit design, there are lots of great resources on where to get started. And it’s quite a rabbit hole to go down, but it’s well worth your time if you want to learn more.

Read the whole thing.  Over a weekend, with your Pi 3.
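
If you’re curious what the software side of such a button looks like once it’s wired, here is a minimal Python sketch that polls a push button through the Pi’s GPIO header using the RPi.GPIO library. The pin number, the pull-up configuration, and the “kick off the backup” placeholder are my assumptions, not Drew’s actual design.

    # Minimal sketch: poll a momentary push button wired between GPIO 17 and ground.
    # Pin choice and pull-up setup are illustrative assumptions, not Drew's wiring.
    import time
    import RPi.GPIO as GPIO

    BUTTON_PIN = 17  # BCM numbering; any free GPIO pin would do

    GPIO.setmode(GPIO.BCM)
    # Enable the Pi's internal pull-up so the pin reads HIGH until the button pulls it low
    GPIO.setup(BUTTON_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)

    try:
        while True:
            if GPIO.input(BUTTON_PIN) == GPIO.LOW:  # button pressed
                print("Button pressed -- kick off the backup here")
                time.sleep(0.5)  # crude debounce so one press doesn't fire repeatedly
            time.sleep(0.05)
    finally:
        GPIO.cleanup()

Wiring the button between the chosen GPIO pin and a ground pin, with the internal pull-up enabled, is one of the simpler and lower-risk circuits to start with.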


SQL Server Easy Button

Drew Furgiuele has an easy button for SQL Server:

Sounds great, right? I bet some of you are already thinking “Oh man, I can’t wait to run the Linux version of SQL Server on this thing!” There’s just one really big catch: the CPUs on Pi boards are ARM-architecture based. Unlike the modern processors in our desktops and laptops, these chips are more akin to what you find in mobile phones or other small devices. It also means the programs you run or write on your computer are probably 32- or 64-bit and designed for Intel or AMD processors. ARM is a completely different architecture, so we can’t upload something to it and expect it to run. Programs have to be designed for it.

Furthermore, a lot of “stock” Pi operating system images are Linux-based, so it can be difficult to write code that interfaces with .NET or Windows-based services. Not that you can’t; you can certainly write bash scripts that make wget or curl requests.

Based on my experiences, at least, nothing with Windows IoT was really that easy…  This is an intro post with a shopping list attached, so I am looking forward to Drew making everybody’s lives easier on a budget of $98.
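
As a concrete picture of that wget/curl idea from the excerpt, here is a hypothetical sketch, in Python rather than bash, of the Pi calling a small web endpoint that fronts a backup job. The URL, port, and payload are invented for illustration and are not from Drew’s post.

    # Hypothetical sketch: call a made-up HTTP endpoint that triggers a backup job
    # on a Windows/SQL Server box. Nothing here reflects Drew's actual setup.
    import json
    import urllib.request

    ENDPOINT = "http://sqlserver.example.local:8080/backup"  # invented URL

    payload = json.dumps({"database": "AdventureWorks"}).encode("utf-8")
    request = urllib.request.Request(
        ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

    with urllib.request.urlopen(request, timeout=30) as response:
        print(response.status, response.read().decode("utf-8"))

The same call is a one-line curl command, which is exactly the kind of thing a bash script on the Pi can handle.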


New Samsung Drives

Joe Chang looks at the Samsung 960 PRO SSD:

All the previous PCI-E x4 gen3 NVMe SSDs were rated between 2,000-2,500MB/s in large block read. The 960 Pro is rated for 3,500MB/s read. This is pretty much the maximum possible bandwidth for PCI-E x4 gen3. Each PCI-E gen3 lane is 8Gbit/s, but the realizable bandwidth is less. In earlier generation products, an upper bound of 800MB/s realizable per 8Gbit/s nominal signaling rate was typical.

Presumably there was a reason why every PCI-E x4 SSD landed in the 2,000-2,500MB/s range. It could be that these were 8-channel controllers and the NAND interface was 333-400MB/s. Even though 8 x 400MB/s = 3,200MB/s, it is expected that excess bandwidth is necessary on the downstream side. There could be other reasons as well, perhaps the DRAM for caching NAND metadata. Intel had an 18-channel controller, which produced 2,400MB/s in the P750 line and 2,800MB/s in the P3x00 line.

If you’re looking at a test lab server, this might be a good disk for you.
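
For what it’s worth, the quote’s arithmetic works out like this (my back-of-the-envelope restatement, not Joe’s):

    # Back-of-the-envelope restatement of the numbers in the quote.
    lanes = 4                        # PCI-E x4
    realizable_per_lane_mb_s = 800   # ~800MB/s usable per 8Gbit/s gen3 lane in earlier products
    channels = 8                     # typical controller channel count
    nand_channel_mb_s = 400          # upper end of the 333-400MB/s NAND interface

    print(lanes * realizable_per_lane_mb_s)  # 3200 -- why older x4 drives topped out near here
    print(channels * nand_channel_mb_s)      # 3200 -- an 8-channel controller's best case

The 960 Pro’s 3,500MB/s rating only fits if both the link efficiency and the controller/NAND side have moved past those older ceilings.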


Speeding Up Database Restores

Paul Popovich gives tips on how to speed up restores from a DataDomain device:

Recently we had to restore our 5TB prod database to rebuild an AG node. Here are some things we learned along the way that will hopefully help you speed up your restores.

Back up your VLDB to multiple files. We found 12 to be the sweet spot in our setup. Make sure you’re going to 10gig NICs on both ends of the transfer.

In terms of folder directories on that thing, we learned to go wide and small. Let me explain. Our setup is this:
\\datadomain\sql\sqlbu\\\Full or Tlog

Click through for more tips.  This is independent of optimizing your restore scripts themselves.
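
To make the twelve-file tip concrete, here is an illustrative sketch that generates a striped backup statement. The database name and folder layout are placeholders standing in for the real paths above, and this is not Paul’s actual script.

    # Illustrative only: build a BACKUP statement striped across 12 files.
    # Database name and folder layout are placeholders, not the post's real setup.
    database = "BigProdDB"
    share = r"\\datadomain\sql\sqlbu"
    file_count = 12  # the sweet spot Paul's team found

    disks = ",\n    ".join(
        rf"DISK = N'{share}\{database}\Full\{database}_{i:02d}.bak'"
        for i in range(1, file_count + 1)
    )

    backup_sql = (
        f"BACKUP DATABASE [{database}]\n"
        f"TO  {disks}\n"
        f"WITH COMPRESSION, STATS = 5;"
    )
    print(backup_sql)

The matching RESTORE reads from the same twelve files, and the striping is what keeps the 10-gig NICs on both ends busy.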


Memory And Storage

Glenn Berry has a nice post on current and near-future memory and storage technologies:

Over the past couple of years, we have seen the introduction and increasing usage of NVM Express (NVMe) PCIe SSDs based on existing NAND flash technology. These typically have latencies in the 50-100 microsecond range. They also use the newer, much more efficient NVMe protocol and the PCIe interface, giving much better performance than older SAS/SATA SSDs using the old AHCI protocol.

Currently, Hewlett Packard Enterprise (HPE) is selling 8GB NVDIMM modules for their HPE Proliant DL360 Gen9 servers and HPE Proliant DL380 Gen9 servers. These modules have 8GB of DRAM which is backed by 8GB of flash for $899.00, which is pretty expensive per gigabyte. These two-socket servers have 24 memory slots that each support up to 128GB traditional DDR4 DIMMs. Any slots you use for NVDIMM modules won’t be available for regular memory usage. You can use a maximum of 16 memory slots for NVDIMM usage, which gives you a maximum capacity of 128GB. You must use Intel Xeon E5-2600 v4 series processors in order to get official NVDIMM support. Micron is scheduled to release larger capacity 16GB NVDIMMs in October of 2016.

Read the whole thing.


Throwing Hardware At The Problem

Erik Darling says get more RAM:

I’m not saying you need a 1:1 relationship between data and memory all the time, but if you’re not caching the stuff users are, you know, using, in an efficient way, you may wanna think about your strategy here.

  • Option 1: Buy some more RAM
  • Option 2: Buy an all flash array

You’ll still need to blow some development time on tuning queries and indexes, but hardware can usually bridge the gap if things are already critical.

Looking at hardware is a reasonable approach.  The best bet is to satisfy the most pressing need at the margin.  Sometimes that means more (or better) hardware, sometimes it means tuning queries, and sometimes it means application-level changes to retrieve data differently.


Hadoop: DAS Or NAS?

Jagdish Mirani asks whether you should prefer Direct Attached Storage (DAS) or Network Attached Storage (NAS) for your Hadoop cluster:

If you want to spin up an Apache Hadoop cluster, you need to grapple with the question of how to attach your disks. Historically, this decision has favored direct attached storage (DAS). This approach is in keeping with the fundamental Hadoop principle of moving processing to where the data lives, thereby taking advantage of disk locality to optimize performance. Disk locality is so core to Hadoop that virtually any description of Hadoop starts with this.

The alternative is to use network attached storage (NAS). In contrast to DAS, NAS separates the compute and storage layers so that storage can be shared across a number of servers by shipping data over the network. Historically, this heavy dependence on the network made NAS an order of magnitude slower. Remember, the state of the art was 1GbE networks, and switches were slower and more expensive. I/O requirements for demanding Hadoop-based applications could only be met by DAS.

This is a very interesting discussion.  In my limited experience, I’ve had trouble selling operations teams on DAS, given the increased ops effort required to keep a bunch of attached disks going.  Hat tip Ari Amster.


Azure IaaS Disk Differences

Rolf Tesmer looks at whether to use SSDs or Premium Disk for SQL Server instances on Azure IaaS:

To test the throughput, I will run a set test with the TempDB database on D:\ (local SSD) and then rerun the test with TempDB moved onto F:\ (P30 premium disk).  Between the tests, SQL Server is restarted so we’re starting with a clean cache and state.

The test SQL script will create a temporary table and then run a series of insert, update, select and delete queries against that table.  We’ll then capture and record the statistics and time.

The results were interesting; read on to learn more.
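
Rolf’s script itself isn’t reproduced here, but the skeleton of that kind of timing harness might look something like this; pyodbc, the connection string, the table shape, and the row counts are all my assumptions.

    # Skeleton of a crude insert/update/select/delete timing harness -- not Rolf's script.
    # Connection string, table shape, and row counts are assumptions for illustration.
    import time
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;"
        "DATABASE=tempdb;Trusted_Connection=yes;",
        autocommit=True,
    )
    cursor = conn.cursor()
    cursor.execute("CREATE TABLE #worktable (id INT IDENTITY PRIMARY KEY, payload CHAR(200));")

    start = time.perf_counter()
    for _ in range(10_000):
        cursor.execute("INSERT INTO #worktable (payload) VALUES (REPLICATE('x', 200));")
    cursor.execute("UPDATE #worktable SET payload = REPLICATE('y', 200);")
    cursor.execute("SELECT COUNT(*) FROM #worktable;").fetchone()
    cursor.execute("DELETE FROM #worktable;")
    elapsed = time.perf_counter() - start

    print(f"Insert/update/select/delete batch took {elapsed:.2f} seconds")

Run it once with TempDB on the local SSD, restart SQL Server, move TempDB to the premium disk, and run it again; the delta between the two timings is the number you care about.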


Home Labs

Chrissy LeMaire shows off her home lab:

I like to test my scripts against a variety of versions/editions and I don’t like spinning VMs up and down all the time. As for the cost: some people spend their money on golf, Polish pottery and gaming rigs. I spend mine on servers, Belgian beer and travel 😉

As you can see, I also have an old MacBook Pro with a 256GB SSD, 4TB HDD and 8GB RAM in the mix. It’s for photos and videos, however. And someone gave me an old silver Shuttle from like 2002, but I haven’t had the time to set it up yet.

The “cloud versus local” lab is a tough call, as both sides have their advantages and disadvantages.


Getting Physical

Denny Cherry explains the value of his having physical hardware on hand:

Now you may be asking why we don’t just do all this in Azure? And we could, but the reason we didn’t is pretty straightforward: cost. Building tons of VMs in Azure and leaving them running for a few weeks for customers can cost a decent amount pretty quickly, even with smaller VMs. Here our cost is fixed. As long as we don’t need another power circuit (we can probably triple the number of servers before that becomes an issue), the cost is fixed. And if we need more power, that’s not all that much per month to add on.

All in all, this will make a really nice resource for our customers to take advantage of, and give us a place to play with whatever we want without spending anything.

Ah, the life-long struggle between cap-ex and op-ex…
