September 30, 2017 / lasseathome

Upgrading the HDD of a Mac Mini (mid 2011) from 500 GB to 2 TB

I ran out of HDD space on my 500 GB Mac Mini while backing up my iPhone and 256 GB iPad, so I needed more space and did an upgrade. For the physical swap there are several excellent guides with photos, but there were very few instructions on how to grow the HFS+ partitions of the drive, so I had to test and find a solution; this blog post documents my steps in the hope that they can be useful for someone else. For the hardware part I used the ifixit guide for mid 2011; the steps are the same for 2012 models, but 2014 models have a new internal design.

Disk removal

Disk removal is pretty straightforward if you have some small Torx tools. In the ifixit guide there were some steps I could skip, namely steps 13-15: disconnecting the IR sensor and extracting the logic board out of the case (which requires a special Mac Mini Logic Board Removal Tool). I could get the HDD out without these steps. The ifixit guide has many good images; in plain text the steps are as follows ("(steps X,Y)" below refers to the steps in the ifixit guide):

  1. Remove Bottom cover (steps 1,2).
  2. Remove the Fan with the two T6 Torx screws and remove the connector (steps 3-5).
  3. Remove the Cowling with one T6 Torx screw and gently pry it out (steps 6,7).
  4. Remove the Antenna Plate with two T8 Torx that hold the HDD and two T8 Torx that hold the plate itself. Lift the antenna plate and disconnect the antenna cable (steps 8-11).
  5. Disconnect the integrated power and SATA connector from the Logic board (step 12).
  6. I did not need to extract the Logic board from the case, so I skipped steps 13-15; I could wiggle the HDD out of its place without doing this. If you have two HDDs mounted you need to extract the logic board to get access to the lower HDD, but this was not necessary for me.
  7. Remove the HDD from the case and remove the two placeholder screws that fix the disk to the case (steps 17-19).

To put it all back together, just reverse the order. There was one thing that took me a little time to figure out: the placeholder screws on the HDD should fit into the black rubber holes closest to the bottom of the case, see the red arrows in the picture below. Originally I thought they should go into the other two bigger metal holes, but I could not get the disk in place that way without removing the Logic Board (those holes are where a second disk would be placed). To get a second disk into that "top position" in the case, one needs to extract the Logic board out of the case, which gives more space to wiggle a disk into place.

Clone Mac HDD and grow the partition

Now I come to the difficult part, where there were no simple guides, so I had to Google and experiment my way to a solution. What I ended up using was a Linux computer with two SATA connections and the dd and GParted programs, plus a Mac and a USB HDD enclosure with the diskutil and Disk Utility programs.

The core of the problem is that it is difficult to grow a Mac HFS+ partition: the partition table on the original disk is limited to the original size of the disk, so the Mac command "diskutil resizeVolume ..." gives the error message "Error: -5341: MediaKit reports partition (map) too small". Research shows there are some alternatives. One is to destroy and rebuild a sufficiently large partition table with the Mac tool gpt and then resize the disk with "diskutil resizeVolume /dev/disk2s2 R", which grows the volume to fill the partition; I failed at this once and ended up erasing the disk, due to a mistake on my side. So I had to start over, making a new clone with dd, and then went down another avenue aiming at using "diskutil mergePartitions", which worked nicely.

So the steps taken to clone the disk and grow it to get one single large partition on the 2TB disk were the following:

  1. Insert the 500 GB and 2 TB disks into a Linux computer. Identify the disks with the lsblk command.
    $ lsblk
    NAME MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
    sdb    8:16   0 465,8G  0 disk 
    ├─sdb1 8:17   0   200M  0 part 
    ├─sdb2 8:18   0   465G  0 part 
    └─sdb3 8:19   0 619,9M  0 part 
    sdd    8:48   0   1,8T  0 disk

    Here I have removed the output for my other disks (sda and sdc) to avoid cluttering up the blog. The old Mac disk sits at /dev/sdb and has three partitions. The new one is at /dev/sdd and does not have any partitions.

  2. Then I do a complete disk clone using dd, a disk copy command that is very powerful and should thus be used with care; it is so powerful that it has earned the nickname "disk destroyer".
    $ sudo dd if=/dev/sdb of=/dev/sdd bs=1M
    500107862016 bytes (500 GB, 466 GiB) copied, 7727,8 s, 64,7 MB/s
    

    It took about 2 hours 9 minutes for dd to complete the cloning, even though I used SATA connections in a fairly fast computer. (This could be done using USB HDD enclosures too, but then the rate would be limited by the USB data transfer rate.) The new disk now has the three original partitions and 1,4 TB of free space at the end. The partitions are: sdd1 – the EFI boot partition, sdd2 – the Macintosh HD (main hard disk), and sdd3 – the Recovery HD.

  3. Then I opened GParted, which shows the partitions and their names together with the unallocated space (GParted screenshot).
  4. I now need to move the Recovery HD to the end of the disk and insert a new partition in the free space. This was easily done in GParted, but for some reason I got a tiny unallocated space at the end; it was so small (close to 1 ppm of the disk) that I didn't bother with it. The new partition I created was named Customer2 and labelled Macintosh HD2 (GParted screenshot).
  5. With the partitions on the disk finished, I could look at the disk contents with lsblk, which shows that my third partition had ended up at sdd4 even though it is physically placed between sdd2 and sdd3 (this is also seen in GParted above).
    $ lsblk
    NAME MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT 
    sdd     8:48  0   1,8T  0 disk 
    ├─sdd1  8:49  0   200M  0 part 
    ├─sdd2  8:50  0   465G  0 part 
    ├─sdd3  8:51  0 619,7M  0 part 
    └─sdd4  8:52  0   1,4T  0 part
  6. GParted could not resize the HFS+ partitions, so I disconnected the drive from the Linux machine and inserted it in a USB HDD enclosure connected to my Macbook, where I could use the Mac diskutil to merge the partitions. The nice thing is that diskutil mergePartitions can create a large enough partition table when it has two partitions to work with. The command and its output were:
    $ diskutil mergePartitions JHFS+ 'Macintosh HD' /dev/disk2s2 /dev/disk2s4
    Merging partitions into a new partition
     Start partition: disk2s2 Macintosh HD
     Finish partition: disk2s4 Macintosh HD2
    Started partitioning on disk2
    Merging partitions
    Waiting for partitions to activate
    Growing disk
    Finished partitioning on disk2
    /dev/disk2 (external, physical):
     #: TYPE NAME SIZE IDENTIFIER
     0: GUID_partition_scheme *2.0 TB disk2
     1: EFI EFI 209.7 MB disk2s1
     2: Apple_HFS Macintosh HD 2.0 TB disk2s2
     3: Apple_Boot Recovery HD 649.8 MB disk2s3

    This operation was very fast, only a few seconds, since it didn't have to move a lot of data.

  7. Then I opened the graphical user interface Disk Utility and performed a First Aid check on the disk which made some small corrections to the disk and partitions automatically.
  8. Then I removed the disk from the USB HDD enclosure, mounted the new disk in the Mac Mini (reversing the order above), and it booted wonderfully. The machine runs fine and I have not noticed any data loss; I am writing this blog post from the machine. I still keep the old HDD as a backup on the book shelf, just in case something happens. So, I now have a 2 TB (hybrid) HDD in my Mac Mini, which gives me some space for the future, and the hybrid technology also seems to give improved performance.

Concluding Remarks and Summary

I hope this can help someone else who has the same problem of growing a partition on a Mac HDD where the size of the partition table is the limiting factor. The trick is to create a new large partition and use diskutil mergePartitions. The whole process could probably be performed on a Mac. In steps 2 & 3 below I used GParted mainly because I have access to it, it is easy to use, and I am comfortable with it. In step 5 I used the graphical Disk Utility since it is also simple to use. In summary, the steps for cloning and growing a Mac HD are:

  1. Clone disk with dd.
  2. Move the Recovery HD to the end of the disk.
  3. Add a new HFS+ partition.
  4. Merge the two HFS+ partitions using diskutil.
  5. Check the disk so that it is clean.
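As a condensed sketch, the steps above look like the shell functions below. All device names are examples from my machines and will differ on yours, so verify them with lsblk (Linux) and diskutil list (Mac) before running anything.

```shell
# Condensed sketch of the clone-and-grow procedure. Device names are
# examples from this post -- verify with lsblk / diskutil list first!

# --- Step 1, on the Linux machine: clone the old disk onto the new one ---
clone_disk() {
    src=$1; dst=$2                        # e.g. /dev/sdb /dev/sdd
    sudo dd if="$src" of="$dst" bs=1M     # add status=progress to watch it
    sudo sync
}

# --- Steps 2-3, in GParted: move Recovery HD to the end of the disk and
# --- create a new HFS+ partition in the freed space.

# --- Step 4, on the Mac, disk in a USB enclosure: merge the two volumes ---
merge_volumes() {
    diskutil mergePartitions JHFS+ 'Macintosh HD' /dev/disk2s2 /dev/disk2s4
}

# --- Step 5, on the Mac: run First Aid in Disk Utility on the disk.
```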
January 24, 2017 / lasseathome

Writing raspbian image to SD card using Mac

The commands differ slightly between Linux and Mac, and this is the procedure I followed to get a Raspbian image transferred to an SD card using my Mac.

  1. Download the image from Raspberry Pi.
  2. Inserted the SD card in the SD slot and identified the disk with the mount command; it was at /dev/disk2s1.
  3. Unmounted the SD card so that a new image can be written to the card.
    diskutil unmount /dev/disk2s1
  4. Wrote the image to the card using dd, addressing the whole disk, i.e. the device name with the partition suffix (s1) removed.
    sudo dd bs=1m if=Downloads/2017-01-11-raspbian-jessie.img of=/dev/disk2
  5. Inserted the card and booted up the RPi machine…
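The steps above can be sketched as a small shell function. The image name and /dev/disk2 are the examples from this post; always verify the device with diskutil list first, since dd to the wrong disk destroys it.

```shell
# Sketch: write a Raspbian image to an SD card on macOS.
# Device name is an example -- pointing dd at the wrong disk destroys it!
write_sd_image() {
    img=$1     # e.g. 2017-01-11-raspbian-jessie.img
    disk=$2    # the whole disk, e.g. /dev/disk2 (not /dev/disk2s1)
    diskutil unmountDisk "$disk" &&        # unmount every partition on it
    sudo dd bs=1m if="$img" of="$disk" &&  # the raw /dev/rdisk2 is faster
    diskutil eject "$disk"                 # card is safe to remove after
}
# usage: write_sd_image 2017-01-11-raspbian-jessie.img /dev/disk2
```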
December 13, 2016 / lasseathome

Installing an SVN server on an ASUS Router

I want to have a low power device that is always connected and accessible from anywhere on the internet, so I can work on, version control, and store my software projects whether at home or traveling. The most convenient solution I have found for my needs is to use the router I already have as a firewall, attach a USB disk to it, and put an SVN server on it. This post collects the procedure and commands I used to install an SVN server on my ASUS RT-AC88N router; the router is sufficiently powerful to serve as a server, and with the attached USB disk it has enough storage for the SVN data and other things. The most important thing for me later is how to configure the system so that it allows multiple users with access rights to different repositories.

Pre-requisites for the installation

The things I have are the ASUS router and an external USB 3.0 disk that will serve as NAS and storage space for the SVN server. The prerequisites are to install Asuswrt-Merlin, a Linux-hacking-friendly custom firmware for ASUS routers, and then add the modern optware software called Entware in its latest incarnation, Entware-ng. The preparatory steps for my installation are:

  1. Change the default username from admin to something else to increase security.
  2. Install Asuswrt-Merlin; it is as easy as downloading the binary and uploading it to the router.
  3. Format the USB disk to EXT4 (I did it on a separate Linux machine), name it NAS, and connect it to the router. A USB 3.0 disk in the USB 3.0 port does not work on the ASUS router, because the router uses an old kernel (2.6.36.4brcmarm) with a known bug for high speed USB (corrected in 2.6.37-rc5 and later), according to the ubuntu forums. To get my disk working I had to connect the USB 3.0 disk to the USB 2.0 port. It is possible to connect a USB 2.0 device to the USB 3.0 port, since that does not trigger the bug. I am waiting for a new kernel to be compiled and made available so that I can enjoy high speed USB (even though 2.0 is enough for my needs right now).
  4. Enable ssh-login and JFFS scripts in the Asuswrt-Merlin web interface menu Administration-System.
  5. Login to the router with ssh and check that the drive is correctly mounted at /tmp/mnt/NAS/.
  6. Then run the Entware installation script entware-setup.sh that is included in the Merlin distribution. This currently installs the newer Entware-ng.
  7. Now I can start working with the Entware system and install my favourite Unix tools using the Entware package manager opkg.
    opkg install gzip bzip2 nano rsync tar
  8. This finishes the prerequisites for me. Next up is to install and configure my subversion server.

Installation of Subversion

I only need access to the SVN repository through the svn:// and svn+ssh:// protocols. The steps are basically to install the SVN server and ssh and open the ports on the router:

  1. Install svn server from Entware:
    opkg install subversion-server
  2. Make a repository directory on the NAS disk
     cd /tmp/mnt/NAS; mkdir svn; cd svn; mkdir theProject;
  3. Create the repository and edit the svn configuration file
    svnadmin create /tmp/mnt/NAS/svn/theProject
    cd theProject/conf
    nano svnserve.conf
  4. In here we disable anonymous access and point to the password file
    anon-access = none
    password-db = passwd
  5. Then we edit the password file with nano passwd.
    [users]
    username1 = password1
    username2 = password2
  6. Then it is time to start svnserve, with option -d to daemonize and -r to set the repository root.
    svnserve -d -r /tmp/mnt/NAS/svn/theProject

    This command is put into /jffs/scripts/post-mount so that it is executed at startup, after the USB disk is mounted.

  7. Finally we need to open the svnserve port on the firewall.
    iptables -I INPUT -p tcp --dport 3690 -j ACCEPT

    This is also put in the /jffs/scripts/post-mount file so that it is executed when the disk is ready.

  8. Now it should be up and running, and we can list the contents by:
    svn list svn://addressToRouter/theProject/

This finishes the SVN installation on the ASUS router. I have not yet tried the svn+ssh access.
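The two lines that go into /jffs/scripts/post-mount (steps 6 and 7 above) can be kept together, sketched here as one small function; the repository path is the one used in this post.

```shell
# Sketch of what /jffs/scripts/post-mount should do once the USB disk is up.
start_svn() {
    repo=/tmp/mnt/NAS/svn/theProject
    svnserve -d -r "$repo"                           # -d daemonize, -r root
    iptables -I INPUT -p tcp --dport 3690 -j ACCEPT  # open the svn port
}
```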

November 24, 2016 / lasseathome

Formatting a drive bigger than 2TB in Linux

I recently installed a big disk into my calculation server but could not format it with the gparted GUI. The main problem is that gparted and other tools use fdisk, which cannot partition drives larger than 2 TB; I wanted to add a 3 TB disk to my calculation server to store data and development code. I had to Google for solutions and found that this can be overcome with the GNU parted command, using a partition table of type Intel EFI/GPT. The latter name comes from the globally unique identifier (GUID) for the EFI System partition and uses information stored in the GUID Partition Table (GPT); EFI uses GPT where BIOS uses a Master Boot Record (MBR). There are plenty of other posts that contain the same essence, but I want to keep some commands readily available for myself, so I once again post the commands I have used and want to remember, in the hope that others might find them useful.

The essence of the procedure is to identify the drive letter with lsblk, run parted to create a gpt table and a primary partition, format the drive with mkfs.ext4, and find the GUID with blkid so it can be stored in the /etc/fstab for automatic mounting. The command sequence is:

$ lsblk     # find drive name
$ sudo parted /dev/sdc # start parted
# the rest is in parted's interactive shell
(parted) mklabel gpt
(parted) unit TB
(parted) mkpart primary 0.0TB 2.7TB
(parted) quit
$ sudo mkfs.ext4 /dev/sdc1
$ sudo tune2fs -m 1 /dev/sdc1
$ sudo blkid     # Identify the drive GUID
$ sudo mkdir /Data
$ sudo nano /etc/fstab
# In fstab place the line
UUID=d18b6e08-d82d-4581-8445-8a463e1043fd /Data ext4 defaults 1 2
$ sudo mount /Data

Now the 3 TB drive is mounted at /Data, and I have a sandbox to play in.
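A quick sanity check after partitioning, sketched below as a function (the device name is an example), confirms that the disk really got a GPT table and shows the resulting partitions and UUIDs.

```shell
# Sketch: verify that a freshly partitioned disk really uses GPT.
check_gpt() {
    dev=$1                                           # e.g. /dev/sdc
    sudo parted -s "$dev" print |
        grep -q 'Partition Table: gpt' &&            # fail if not GPT
    lsblk -f "$dev"                                  # partitions and UUIDs
}
# usage: check_gpt /dev/sdc
```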

July 24, 2015 / lasseathome

Avoid Killing SD Card on Raspberry Pi

Having read that there is a risk of killing the SD card with a lot of writes, I set up a system that writes the big log files and data to a normal hard drive while keeping the regular and important system information on the SD card. The reason is that I have a temperature logger based on a Telldus Tellstick Duo that listens to sensor events on the 433 MHz band and writes temperature log data several times a minute. According to reports and discussions on the internet, this could burn out the solid state card. I do not know whether the risk is real in my case, but I reason that it is better to be safe than sorry. I did this three years ago on my first RPi but did not write down the procedure, so here I go again, documenting it for anyone who is interested at the same time as I do it for my new RPi 2 B. This is the hardware that I have:

  1. Raspberry Pi 2 B.
  2. External USB harddrive.
  3. USB hub with power adapter.
  4. Telldus Tellstick Duo (not important for this).
  5. A 7-port USB hub with power supply 5V 2.5A that powers the Raspberry Pi, Tellstick Duo, and USB hard drive.
  6. For completeness, I also have a DWA-121 WiFi network adapter and a wireless integrated keyboard/trackpad; the dongles for these are connected directly to the RPi (not important for this).
  7. A separate laptop with Ubuntu Mate with SD card reader and USB connectors.

The additional USB hub (5) with external power source was necessary to get enough power for the USB hard drive to startup.

Starting up

I downloaded Raspbian Jessie via torrent and wrote it to the card using the Ubuntu laptop. I identified the device for the SD card with df, removed the partition suffixes from the device name, and wrote the image with dd:

df -h
# output ... 
# /dev/mmcblk0p3   27M  444K   25M   2% /media/SETTINGS
# /dev/mmcblk0p6  6,3G  2,6G  3,5G  43% /media/root
# /dev/mmcblk0p5   60M   19M   42M  32% /media/boot1
sudo dd bs=4M if=2015-11-21-raspbian-jessie.img of=/dev/mmcblk0

Booted up the RPi. The first and most important thing: added a password to user pi.
Then: changed locale, changed keyboard, grew the file system. Rebooted, ran an update and an upgrade, installed emacs, and prepared the keyboard to swap Caps Lock & Ctrl.

sudo apt-get update
sudo apt-get upgrade
sudo apt-get install emacs
cd /etc/default/
sudo nano keyboard
# added options XKBOPTIONS="...,ctrl:swapcaps"

Now I have a base system that will give me a basic working configuration so I can start working with the file system.

Placing /var/ on the external USB disk

Now it is time to get down to the more serious business that can damage the system, so I work with the file system offline on a separate machine to set it up. I shut down the RPi and inserted the USB HDD and SD card in the Linux laptop. In short, the core idea is to place all the data in /var on the USB HDD. The motive is that the files that change the most live under /var/ on unix systems: for example the log files, the www files, the database data, etc. Therefore I intend to mount the USB HDD there, with the intention of having all my file sharing and data stored there. So I attached the USB HDD to the Linux machine and did the following.

  1. Formatted the HDD to an ext4 file system using gparted on the Linux machine, and gave it the label VAR.
  2. Went to the directory of the memory card that contains the root partition, and into /SDCardMountPath/etc/
  3. Added a line to the fstab file, extracted the UUID of the disk using blkid.
    sudo emacs fstab&
    blkid
    #output
    # ...
    # /dev/sdb1: LABEL="VAR" UUID="87f1540c-dac0-4911-98c1-1a23666cafed" TYPE="ext4" PARTUUID="439c27a4-01"
    # the resulting line for /etc/fstab was then
    # UUID=87f1540c-dac0-4911-98c1-1a23666cafed /var               ext4    errors=remount-ro 0       1
    
  4. Moved all data from the card’s /var/ directory to the HDD
    cd /SDCardMountPath/var/
    sudo mv * /HDDMountPath/

    This took some time to complete.

  5. Double checked that the /SDCardMountPath/var/ directory was empty after the move so that a file system can be mounted there at next RPi boot.
  6. Ran sync;sync to sync the filesystems and unmounted and removed them from the Laptop.
  7. Inserted the card in the RPi and the USB HDD in the USB hub. Booted up the RPi. At boot an fschk was run and all worked fine so the RPi 2 B is up with the HDD mounted at /var/, ready to be filled.

Now I have a system where /var/ is stored on an HDD, which can tolerate many more writes than an SD card, so I can safely log gigabytes of data. For the curious I can mention that I started logging with my first Raspberry Pi 1 B, with a similar USB disk setup, in August 2013; as of January 2016 the data amounts to 640 MB, and there is not yet any problem with the RPi or the SD card.
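The offline /var move (steps 2-6 above) can be sketched as one function. The mount paths are the placeholders used in this post, and the device name passed to blkid is an example; the fstab line must use the UUID that blkid actually reports for your disk.

```shell
# Sketch of the offline /var migration, run on the separate Linux laptop.
# Mount paths are the placeholders from this post; /dev/sdb1 is an example.
move_var() {
    sd=/SDCardMountPath       # root partition of the SD card
    hdd=/HDDMountPath         # the freshly formatted ext4 USB disk
    uuid=$(sudo blkid -s UUID -o value /dev/sdb1)
    sudo mv "$sd"/var/* "$hdd"/                      # move the data over
    ls -A "$sd"/var                                  # must print nothing
    echo "UUID=$uuid /var ext4 errors=remount-ro 0 1" |
        sudo tee -a "$sd"/etc/fstab                  # mount the HDD at /var
    sync; sync
}
```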

July 23, 2015 / lasseathome

Configuring WPA2 based WiFi network from command line on Raspberry Pi

This is what I did to get the WiFi configuration to work on my Raspberry Pi. I have a USB network card that is OK according to http://elinux.org/RPi_USB_Wi-Fi_Adapters: a D-LINK DWA-121 USB dongle, which should work out of the box with my Raspberry Pi and Raspberry Pi 2 B. I inserted it into the system and double checked that all is OK with the detection:

pi@RPi-2B ~ $ lsusb
 Bus 001 Device 002: ID 0424:9514 Standard Microsystems Corp.
 Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
 Bus 001 Device 003: ID 0424:ec00 Standard Microsystems Corp.
 Bus 001 Device 004: ID 2001:3308 D-Link Corp. DWA-121 802.11n Wireless N 150 Pico Adapter [Realtek RTL8188CUS]
 Bus 001 Device 005: ID 1d57:32da Xenta 2.4GHz Receiver (Keyboard and Mouse)

On row 4 it can be seen that the dongle is detected OK. Then I check that the basic network is ready to be configured by analyzing the "interfaces" file.

pi@RPi-2B ~ $ cat /etc/network/interfaces
auto lo
iface lo inet loopback

auto eth0
allow-hotplug eth0
iface eth0 inet manual

auto wlan0
allow-hotplug wlan0
iface wlan0 inet manual
wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf

auto wlan1
allow-hotplug wlan1
iface wlan1 inet manual
wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf

Here there is a choice: one option is to put the wpa configuration in the file /etc/network/interfaces, and the other is to put it in /etc/wpa_supplicant/wpa_supplicant.conf; I selected the latter. In this case the necessary things to enter are the lines after row 3 in "wpa_supplicant.conf"; my file reads:

pi@RPi-2B ~ $ sudo cat /etc/wpa_supplicant/wpa_supplicant.conf

ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
update_config=1

network={
        ssid="MY_SSID"
        psk="MY_WPA_PWD"
        key_mgmt=WPA-PSK
        pairwise=CCMP
        auth_alg=OPEN
}

Then I shut down the interface and restart it with:

pi@RPi-2B ~ $ sudo ifdown wlan0
pi@RPi-2B ~ $ sudo ifup wlan0

There is an error message which is seemingly standard, but I can now check that my WiFi network is up and running. I check with:

pi@RPi-2B ~ $ ifconfig 
... irrelevant output removed...
wlan0     Link encap:Ethernet  HWaddr bc:f6:85:e7:40:0e  
          inet addr:192.168.0.32  Bcast:192.168.0.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1301 errors:0 dropped:99 overruns:0 frame:0
          TX packets:409 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:263812 (257.6 KiB)  TX bytes:58604 (57.2 KiB)

which shows that the wlan0 interface has the right local IP, 192.168.0.32, so the RPi-2B WiFi card is up and running…
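As a side note, instead of storing the plaintext passphrase in wpa_supplicant.conf, the standard wpa_passphrase tool can generate a network block with a hashed psk; the SSID and passphrase below are the placeholders from the file above.

```shell
# Sketch: generate a wpa_supplicant network block with a hashed psk,
# so the plaintext passphrase need not sit in the config file.
gen_psk() {
    wpa_passphrase "$1" "$2"   # prints network={...}; the psk line is the
                               # hash, the commented line is the plaintext
}
# usage: gen_psk MY_SSID MY_WPA_PWD | sudo tee -a /etc/wpa_supplicant/wpa_supplicant.conf
```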

October 25, 2014 / lasseathome

Setting up OpenVPN on a Tomato router with Tunnelblick as client

On many occasions I want to access my computers and servers behind my firewall, and the normal solution was to have only the ssh port open and then work on the command line with the machines inside. A better solution is to set up a VPN and connect to it, but for a long time I thought it was too difficult, as there seemed to be quite a threshold. I have now gone through that endeavor, it was not too hard, and this post documents my steps. My hardware is:

  • OpenVPN Server – An ASUS RT-N16 router with Tomato by Shibby, using K26-RT-N5x-AIO (all in one) image.
  • OpenVPN Client – A Macbook where I have installed the Tunnelblick client and run the key generation scripts.

Starting up

Prerequisites are installing Tomato on the ASUS RT-N16 and downloading and installing the Tunnelblick client on the Mac. It is good to install the Tunnelblick client before the server is finished, since the keys can be generated with the easy-rsa utility included with it. I guess other OpenVPN clients also include easy-rsa.

Setting up OpenVPN on the router

In VPN Tunneling > OpenVPN Server > Basic Tab

(Screenshot OpenVPN-1: the Basic tab settings.)

In VPN Tunneling > OpenVPN Server > Advanced Tab

(Screenshot OpenVPN-2: the Advanced tab settings.)

Generating keys for the OpenVPN Keys tab

This is where we use the fact that we are on a Mac with Tunnelblick installed: Tunnelblick brings easy-rsa in its package. We go to Tunnelblick's easy-rsa folder and edit the configuration file called "vars". We need sudo to edit the vars file, since root owns the directory. Reading the README and some Googling helped me understand what to write in the file. (After starting Tunnelblick I saw that one can open easy-rsa in a terminal directly from Tunnelblick's utilities tab.)

cd /Applications/Tunnelblick.app/Contents/Resources/easy-rsa-tunnelblick
sudo nano vars

Now we can start to generate some keys; I followed the order given in the last lines of the README. Do a sudo bash first, since all commands need to be run with root permissions in the directory.

sudo bash
source ./vars
./clean-all
./build-dh
./pkitool --initca
./pkitool --server myserver
./build-key client

This gives the following files, placed in the subdirectory keys: dh1024.pem, ca.key, ca.crt, myserver.crt, myserver.csr, myserver.key, client.crt, client.csr, client.key, and some other files. The file ca.key is private and must be stored in a safe place. The contents of these files are entered into Tomato according to the following template:

(Screenshot OpenVPN-3: where the key and certificate contents go in the Keys tab.)

This was the most difficult part, but it was not as difficult as I thought a few weeks ago.

Setting up the Client

The startup of Tunnelblick generated a directory with a sample config file for me, "config.ovpn", which I edited; then I added the ca.crt file and my client files client.crt and client.key to the same directory. After that I just double clicked on config.ovpn, which starts Tunnelblick and loads this configuration file, as well as the other files. I selected to install it for my user only and not for all users on the machine.
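For reference, a minimal client config of this kind might look like the sketch below. The server address, port, and protocol here are assumptions and must match the values chosen in the Tomato Basic tab; the three file names are the ones generated above.

```
# config.ovpn -- minimal sketch; remote/port/proto must match the server
client
dev tun
proto udp
remote my.home.dyndns.example 1194   # assumed address and port
resolv-retry infinite
nobind
persist-key
persist-tun
ca ca.crt
cert client.crt
key client.key
verb 3
```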

As a final comment, added after the main publication of the post, I can now confirm that it works well. I could sit in China and read and post on my Facebook account, and I could also reach Google's Swedish and American servers, which was not possible without the VPN connection due to the net filters active in China.

References

The following places were the main sources for this set up procedure and installation.

  1. OpenVPN Documentation.
  2. The blog by Maciej Mensfield, with essentially the same information as here but with a different path for generating keys, among other differences.
  3. The README file in the directory for the easy-rsa included in the installation.