Ludicrous speed

April 7, 2017

Just a quick post for some storage bragging rights 🙂

I’ve upgraded my home PC with a pair of SanDisk FusionIO ioScale 410GB PCIe flash adapters. And I thought SSD was fast 🙂

[Image: ludicrous speed]

Here’s a quick benchmark using the Disk utility in Ubuntu 16.04.

[Benchmark screenshot: FusionIO card (/dev/fioa)]

For comparison, here’s the same benchmark for a SanDisk 256GB SSD.

[Benchmark screenshot: SanDisk 256GB SSD (/dev/sda)]

And a comparison to an old 1TB SATA drive that I’ve still got running.

[Benchmark screenshot: 1TB SATA drive (/dev/sdd)]

Finally, here’s a benchmark of a software RAID-5 array running on three old 300GB SATA drives.

[Benchmark screenshot: software RAID-5 array (/dev/md0)]
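If you want a rough command-line cross-check of the sequential read figures, something like this works (a quick sketch; the device names are the ones from the screenshots above):

sudo dd if=/dev/fioa of=/dev/null bs=1M count=4096 iflag=direct   # FusionIO card
sudo dd if=/dev/sda of=/dev/null bs=1M count=4096 iflag=direct    # SATA SSD

The iflag=direct option bypasses the page cache, so the result reflects the device rather than RAM.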

The only downside is that the drivers for the FusionIO cards aren’t in the mainline Linux kernel, so I need to recompile them whenever I upgrade the kernel version.
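The rebuild after a kernel update looks roughly like this (a sketch only; the source directory and package layout depend on which version of the iomemory-vsl driver you have):

sudo apt-get install linux-headers-$(uname -r)   # headers for the new kernel
cd ~/src/iomemory-vsl                            # wherever the driver source lives
make clean && make
sudo make install
sudo modprobe iomemory-vsl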

Categories: Uncategorized

Dock in the Box

April 12, 2016

[Image: Pi cluster armed with Docker]

I finally got around to re-imaging my Raspberry Pi cluster to run Docker. I’ve used the Hypriot image, which worked flawlessly, and retrofitted some scripts to manage soft power control for the cluster nodes via the ATXRaspi board.
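Each node was registered using docker-machine’s generic driver, roughly like this (a sketch; the IP address, SSH user, and Swarm discovery token are placeholders, and dspi0 additionally gets --swarm-master):

docker-machine create -d generic \
  --generic-ip-address=x.y.z.1 \
  --generic-ssh-user root \
  --swarm --swarm-discovery token://<token> \
  dspi1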

# docker-machine ls
NAME    ACTIVE   DRIVER    STATE     URL                  SWARM
dspi0            generic   Running   tcp://x.y.z.0:2376   dspi0 (master)
dspi1            generic   Running   tcp://x.y.z.1:2376   dspi0
dspi2            generic   Running   tcp://x.y.z.2:2376   dspi0
dspi3            generic   Running   tcp://x.y.z.3:2376   dspi0
dspi4            generic   Running   tcp://x.y.z.4:2376   dspi0
dspi5            generic   Running   tcp://x.y.z.5:2376   dspi0
dspi6            generic   Running   tcp://x.y.z.6:2376   dspi0

Next I need to figure out what this Cetacean cluster can do, and how to do it…

Kernel Version: 4.1.17-hypriotos-v7+
Operating System: linux
Architecture: arm
CPUs: 28
Total Memory: 6.984 GiB
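As a first smoke test, something along these lines should land a container somewhere on the swarm (a sketch; hypriot/rpi-busybox-httpd is one of Hypriot’s ARM demo images from around that time):

eval $(docker-machine env --swarm dspi0)
docker run -d -p 80:80 hypriot/rpi-busybox-httpd
docker ps        # shows which node the container was scheduled on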

I do want to get Ceph running on the cluster again, but will probably run that directly on the base Linux OS rather than in Docker. It can then present storage to the various Docker containers.
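The rough idea (a sketch, with pool, image, and container names invented for illustration) is to map an RBD image on the host and bind-mount it into whatever container needs it:

sudo ceph osd pool create dockerpool 64          # pool name and PG count are examples
sudo rbd create dockerpool/appdata --size 4096   # 4 GB image
sudo rbd map dockerpool/appdata
sudo mkfs.ext4 /dev/rbd0
sudo mount /dev/rbd0 /mnt/appdata
docker run -d -v /mnt/appdata:/data some-arm-image   # image name is a placeholder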

Finally, for no particular reason, I cut up a heatsink from an old video card and stuck the pieces onto the Pi nodes.

Categories: Docker, Raspberry Pi

Napkin notes

April 5, 2016

Find the disk capacity used by files not accessed for more than a given number of days:

find . -type f -atime +30 -print0 2>/dev/null | du -hc --files0-from - | \
tail -n 1

And wrap it in a loop to show capacity by various dates:

for a in 30 90 180 365 730 1460 11680; do \
echo -n "Files older than $a days = "; \
find . -type f -atime +$a -print0 2>/dev/null | \
du -hc --files0-from - | tail -n 1; done
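Note that -atime keys on the last access time; for last modification time, the same trick works with -mtime:

find . -type f -mtime +30 -print0 2>/dev/null | du -hc --files0-from - | tail -n 1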

 

Categories: Uncategorized

Back to school for OpenStack training

July 17, 2015

This week I’ve been fortunate enough to attend a four-day Red Hat OpenStack Implementation course.

I’d recommend this course to anyone interested in developing their skills and familiarity with Red Hat’s OpenStack distribution. The course delivery and materials were excellent, and there were labs, lots and lots of lab exercises: manually installing and configuring all the common services, including Keystone, Glance, Nova, Neutron, Cinder, Heat, Swift, Ceilometer, and Horizon, as well as Gluster, RabbitMQ, and MySQL.

Below is a gratuitous screen shot of the completed labs – a small victory that was made hollow by then seeing PackStack automate a similar deployment in about 30 minutes.

[Screenshot: completed Red Hat OpenStack course labs]
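For reference, the PackStack all-in-one deployment boils down to little more than a one-liner (a sketch of the all-in-one case):

sudo packstack --allinone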

Now to get it all deployed on my Pi cluster at home 🙂

Categories: OpenStack

A bucket of squid

June 10, 2015

Since my last post I’ve been tinkering with my little Pi cluster a bit, but mostly trying to sort out a better case and power supply. Although I do like the Lego aesthetic, it’s far too likely to be randomly disassembled by my kids! The power supply was also an issue, as the little USB powered hub only had a 3A supply and was struggling with six Pis.

So I’ve rehoused the cluster into the case shown below, and I think I now have a good (still WIP) solution for the power supply. The picture shows five Pi 2 B cluster nodes (plus one Pi B+ for cluster control), though I’m planning to add a sixth Pi 2 B cluster node on the left-hand side. It’s hard to see, but there’s also an eight-port Ethernet switch under the Pis.

## See the bottom of this post for updates ##

[Photo: Pi chassis]

Unfortunately the case was damaged in transit, and is now held together with super glue! To their credit the vendor quickly provided a full refund, and I’m ordering a replacement. Until that arrives I’m using the damaged case as a proof of concept.

[Photos: Pi case]

Apart from the damage in transit, I’m very happy with the case. There was a 120mm fan under the top vent, but I removed that and the system remains cool-ish with only passive air flow.

For the power supply I’m using four quad-USB car charging adapters, all connected to a laptop power supply. You can see them in the top left and right of the case photos. The USB charging adapters tolerate an 8-20V input, and each outputs 5V at up to 6A (30W) across its four USB ports. Perfect for my small Pi cluster, and with the potential for an in-car mobile Pi cluster (just kidding!).

[Photo: USB power adapters]

Currently the power supply is still a bit primitive, with no power switch or soft power-down option, so I need to manually shut down all the cluster nodes before disconnecting the power. Which brings me to the WIP section…

I’m planning to use an ATXRaspi board to manage power to the Pi control node, and modify the startup/shutdown process to then manage power to the six Pi cluster nodes. The control node will connect via GPIO to a solid state relay to switch the cluster power on and off. I’m considering using two SSRs so that I can selectively fail half of the cluster nodes.

[Diagram: Pi case power plan]

Power On

  1. Connect mains power, no power to RPi nodes (option to continuously power Ethernet switch)
  2. Press ATXRaspi switch, enable power to RPi control node (and Ethernet switch) and trigger startup script
    1. Startup script activates the DC relay and enables power to the RPi cluster nodes

Power Off

  1. Press ATXRaspi switch, this triggers the shutdown script on the RPi control node (sketched after these lists)
    1. Shutdown script executes remote shutdown of RPi cluster nodes via SSH
    2. Shutdown script deactivates the DC relay and powers off the RPi cluster nodes
    3. Shutdown script completes the shutdown and powers off the RPi control node (and Ethernet switch)
  2. Disconnect mains power
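Here’s roughly what I have in mind for the scripts (a sketch only; the GPIO pin, hostnames, and user are placeholders):

# Startup, e.g. appended to /etc/rc.local on the control node
echo 17 > /sys/class/gpio/export
echo out > /sys/class/gpio/gpio17/direction
echo 1 > /sys/class/gpio/gpio17/value    # close the relay, power up the cluster nodes

# Shutdown, triggered by the ATXRaspi button
for node in node1 node2 node3 node4 node5 node6; do
  ssh pi@$node sudo poweroff             # remote shutdown of each cluster node
done
sleep 30                                 # give the nodes time to halt
echo 0 > /sys/class/gpio/gpio17/value    # open the relay, cut power to the cluster nodes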

I’ll update next when the parts arrive and I see if this actually works… 🙂

## Update 1 – It Lives! ##

I found some time this evening to pull the cluster apart and test the ATXRaspi and relays that arrived last week.

Looks good! Button control works great and the control node startup (/etc/rc.local) triggers the relays via the GPIO connections.

[Photo: testing the ATXRaspi]

Now to put it all back together and test the cluster node shutdown scripts.

## Update 2 – Back in the box ##

Here are some photos of the reassembled cluster, now with working soft power on/off controls.

Pi cluster

Front top view – showing power button and ATXRaspi (lower right side)

Pi cluster

Front left view – showing the USB power adapters that are controlled by the relays

Pi Cluster

Front right view – showing the power control relays

Ceph is running nicely, and I’ve updated the CRUSH map to split the placement group (PG) copies between the odd/even nodes, as they hang off different USB power supplies.
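The change amounts to adding a bucket level for the two power supplies and a rule that places replicas across them, something like this (a sketch; the bucket, rule, and host names are examples, I’ve reused “rack” as a stand-in bucket type, and exact syntax varies by Ceph release):

ceph osd crush add-bucket psu-a rack
ceph osd crush add-bucket psu-b rack
ceph osd crush move psu-a root=default
ceph osd crush move psu-b root=default
ceph osd crush move piceph1 rack=psu-a   # odd nodes under one supply
ceph osd crush move piceph2 rack=psu-b   # even nodes under the other
ceph osd crush rule create-simple split-psu default rack

The pools then need to be pointed at the new rule (ceph osd pool set <pool> crush_ruleset <id>).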

[Screenshot: ceph osd tree]

And some glamour shots 🙂

[Photos: Pi cluster]

I’ve also uploaded the pictures here – http://imgur.com/a/0yKqu#0

Categories: ceph, Raspberry Pi

My little Cephalopony

April 9, 2015

In order to teach myself a bit about Ceph, I’d like to introduce my modestly scalable, low performance, gigabyte-scale Raspberry Pi storage cluster!

[Photo: Ceph Pi cluster]

That’s five Raspberry Pi v2, each with the standard quad-core ARM Cortex-A7 CPU (over-clocked to 1 GHz), 1 GB memory, and four 8 GB USB memory sticks for storage. When I was a lad that was a lot…

Installation was embarrassingly easy, mostly following instructions from here, though I used Minibian as the OS instead of Raspbian. Along the way I also discovered the Mosh shell, which makes remote connection over varied, slow, and intermittent networks tolerable.
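For reference, a typical ceph-deploy walkthrough from that era boils down to roughly the following (a sketch only; it may not match the guide I followed exactly, and the device names are just examples):

ceph-deploy new piceph1                      # initial monitor on the first node
ceph-deploy install piceph1 piceph2 piceph3 piceph4 piceph5
ceph-deploy mon create-initial
ceph-deploy osd prepare piceph1:/dev/sda     # a USB stick as the OSD device
ceph-deploy osd activate piceph1:/dev/sda1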

Obviously this is an educational build and not intended for serious use, though it’s already clear that 8 GB USB sticks are too small. Their appeal was low cost and the “Christmas tree” effect of so many blinkenlights.

ceph@piceph1:~$ ceph -s
cluster 661a43a5-e9cc-4350-851d-109d07c44773
health HEALTH_OK
monmap e1: 1 mons at {piceph1=xxx.xxx.xxx.xxx:6789/0}, election epoch 2, quorum 0 piceph1
osdmap e24: 5 osds: 5 up, 5 in
pgmap v49: 192 pgs, 3 pools, 0 bytes data, 0 objects
25694 MB used, 9240 MB / 36923 MB avail
192 active+clean

I’ll update with more detail as I figure out just what to do with this “data centre in a lunch box” 🙂

Categories: ceph, Raspberry Pi

Custard On Top

September 24, 2013

I’ve started a new IBM Redbook residency this week. It’s a bit different from those I’ve previously worked on, as work and family commitments don’t allow for travel. So I’ll be working remotely, and part-time, from Australia while the rest of the team are in Mainz, Germany.

The aim is to produce a new Redbook on IBM N Series Clustered Data ONTAP (cDOT), and to update the existing N Series hardware and software guides. I was also on the team that produced the previous update to the N Series hardware and software guides.

I’d describe myself as very experienced with Data ONTAP 7 (or DOT 8 in 7-mode). I was first NCDA certified in 2005, and have since collected an embarrassing array of NetApp certs. I also worked at NetApp as a technical instructor, and wrote the original NCDA certification study guide. But so far my experience with Clustered mode has been minimal.

For me, this is a great opportunity to keep my technical skills current, whilst contributing my experience and new knowledge back to the book.

I’ll update this post over the next few weeks to document my experiences.

Week 1 – Back to school

The first week was spent in a Clustered ONTAP 8.2 Administration course, to get us all up to speed with the new features. The course was presented via WebEx in a “virtual live” format. The WebEx format worked well, with a very knowledgeable instructor, and plenty of opportunity to ask questions.

The gotcha for me is that the course was scheduled for Central European time, since that’s where the rest of the residents are located. That’s 5pm-1am here in Melbourne, Australia. With my day job, and family too, this made for a _very_ long week.

My first impressions of cDOT are very positive, with much of the traditional ONTAP goodness still familiar, though enhanced in many ways. At the same time, the change from Active/Active ‘Clustered’ controllers, to a scalable cluster that is built from ‘HA Pairs’ has altered the behaviour of many features.

I felt a bit like a wide-eyed Dorothy in the Wizard of Oz – we’re not in Kansas anymore!

[Image: Toto, I've got a feeling we're not in Kansas anymore]

Here’s a quick list of some changes in Cluster mode:

  • Clustered ‘HA Pairs’ replace the traditional dual Active/Active controller design.
  • Fail-over within a HA Pair is improved by using Storage Fail Over (SFO) instead of Cluster Fail Over (CFO).
  • Different resource types, such as aggregates, volumes, and network ports & interfaces, are now bound either to physical nodes or to vservers, or are cluster-wide.
  • Vservers, otherwise known as Storage Virtual Machines (SVM), replace the vfiler Multistore feature.
  • Whereas a 7-mode Virtual Interface (VIF) is bound to a physical node, its replacement, a Logical Interface (LIF), is bound to a logical vserver (cluster-wide).
  • SnapMirror loses its Qtree and Synchronous modes, but gains several other features, such as load-sharing mirrors.
  • SnapVault is now based on Volume SnapMirror, so it retains the storage efficiency features of the source (deduplication and compression).
  • Some features from 7-mode are not available, but may be expected in future versions. (e.g. SnapLock, MetroCluster, SyncMirror, etc).

There are many more that I’ve not mentioned in the above list, but for those you’ll need to read the book (or at least the updates to this post 🙂)

Week 2 – Meet the team

It’s only the start of the week, but here’s a screen shot of the kick-off meeting. We used Google Hangouts as a good way for the team to ‘meet’.

My fellow residents are (from left to right): Christian Fey, Danny Yang, Michal Klimes, Roland Tretau (leader, main screen), myself, and Tom Provost.

[Screenshots: Roland Tretau and the residents on the Hangout]

This meeting was a brainstorming session to create a draft Table of Contents for the book: main parts, chapter topics, etc. Once the ToC is agreed, we’ll assign different chapters to each team member to complete.

I’m going to start on the updates to the existing H/W guide and 7-mode S/W guide. This is a good fit for working remotely as it’s separate from the Cluster mode book, though there will be common sections that should be easy to adapt. Once I’ve finished updating the existing books I’ll rejoin the rest of the team.

All Redbooks are developed in Adobe FrameMaker, so one of the first tasks for any resident is to install the FM software and the ITSO custom toolkit. The ITSO have developed an applet called QAM (Quick Access Menu) that automates many book creation tasks. This also ensures that the authors, who are often new to FrameMaker, always follow the ITSO style.

[Image: mock-up of the cDOT cover]

By the end of the week I’ve merged the contents of both the h/w and s/w guides into the new book files, and have started reviewing new material for inclusion.

The main challenge so far hasn’t been working remotely, but in trying to find time to work on the book part-time. I’m still doing my day job too, and anyone who’s worked at IBM knows that already requires 100% effort!

Week 3

Future…

Week 4

Future…

Week 5

Future…

Week 6

Future…

Categories: Storage, Training