Hi everyone,
Brilliant thread going here! Unfortunately it hasn't helped me (if it had I wouldn't be posting!)
I have just installed Ubuntu 10.10 with one 320 GB drive (Ubuntu is installed here) and four 2 TB drives (three in a RAID5 array, one as a backup until I'm happy the array is stable). I created the RAID array with Disk Utility through the GUI. I'm not 100% sure my problems match what others are experiencing.
When I start Disk Utility, the array is not running: it shows as partially assembled and the capacity is reported as 0 KB. I have to select Stop RAID Array and then Start RAID Array to get the array running, and then mount the partition by clicking Mount Partition.
I want to autostart and automount the array on boot, but to begin with I'd just be happy with the array autostarting. I've followed the instructions here with no luck. Can someone please tell me what's going wrong and the commands to fix it?
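For what it's worth, here's what I've tried so far to get the array assembled at boot, based on the suggestions earlier in the thread (the paths are the Ubuntu defaults; I'm assuming the problem is that the initramfs doesn't have an up-to-date array definition):
Code:
# Regenerate the ARRAY line from the running array, append it to
# mdadm.conf, then rebuild the initramfs so the array is assembled at boot
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
sudo update-initramfs -u
If that's the wrong approach, please say so!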
My array uses /dev/sda1, /dev/sdb1, and /dev/sdc1, and right now it reckons it's degraded, yet all disks report as healthy(?!)
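Looking at /proc/mdstat below, only sda1 and sdb1 are active ([_UU]), so it seems sdc1 has dropped out even though the disk itself is healthy. Is this roughly the right way to check and re-add it? (I haven't run the last command yet in case it kicks off a rebuild against the wrong disk.)
Code:
sudo mdadm --detail /dev/md0       # see which slot is missing
sudo mdadm --examine /dev/sdc1     # confirm sdc1 carries the array's UUID
sudo mdadm /dev/md0 --re-add /dev/sdc1
cat /proc/mdstat                   # watch the recovery progress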
Any help will be greatly appreciated! Perhaps I'm just being stupid?
The info below was captured with the array mounted but degraded:
Code:
root@HTPC:/home/media/Desktop# sudo mdadm -D -s
ARRAY /dev/md0 level=raid5 num-devices=3 metadata=01.02 name=:Array UUID=559b0f9b:13306bf7:c45d6113:750b927c
Code:
# mdadm.conf
#
# Please refer to mdadm.conf(5) for information about this file.
#
# by default, scan all partitions (/proc/partitions) for MD superblocks.
# alternatively, specify devices to scan, using wildcards if desired.
DEVICE partitions
# auto-create devices with Debian standard permissions
CREATE owner=root group=disk mode=0660 auto=yes
# automatically tag new arrays as belonging to the local system
HOMEHOST <system>
# instruct the monitoring daemon where to send mail alerts
MAILADDR root
# definitions of existing MD arrays
ARRAY /dev/md/Array level=raid5 metadata=1.2 num-devices=3 UUID=559b0f9b:13306bf7:c45d6113:750b927c name=:Array
# This file was auto-generated on Tue, 15 Feb 2011 20:05:46 +1300
# by mkconf $Id$
Code:
# /etc/fstab: static file system information.
#
# Use 'blkid -o value -s UUID' to print the universally unique identifier
# for a device; this may be used with UUID= as a more robust way to name
# devices that works even if disks are added and removed. See fstab(5).
#
# <file system> <mount point> <type> <options> <dump> <pass>
proc /proc proc nodev,noexec,nosuid 0 0
# swap was on /dev/sda5 during installation
UUID=a46edf1b-1bdd-4cf3-8ce5-98ca344a8eee none swap sw 0 0
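I notice my fstab has no entry for the array at all, which presumably explains why it never automounts. Would something like this be right, using the UUID blkid reports for /dev/md0p1 below? (The /media/Data mount point is just my guess at a sensible location.)
Code:
UUID=fb3523d0-72e7-4f5e-b8e0-fd448d5c875f /media/Data ext4 defaults 0 2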
Code:
root@HTPC:/home/media# sudo blkid
/dev/sda1: UUID="559b0f9b-1330-6bf7-c45d-6113750b927c" LABEL=":Array" TYPE="linux_raid_member"
/dev/sdb1: UUID="559b0f9b-1330-6bf7-c45d-6113750b927c" LABEL=":Array" TYPE="linux_raid_member"
/dev/sdc1: UUID="559b0f9b-1330-6bf7-c45d-6113750b927c" LABEL=":Array" TYPE="linux_raid_member"
/dev/sdd1: LABEL="Backup" UUID="39bb2373-96d6-4e89-a0dd-01eef0976a4c" TYPE="ext4"
/dev/sde1: UUID="1c3513eb-6ae0-4b1f-a817-054f57bf7f4a" TYPE="ext4"
/dev/sde5: UUID="a46edf1b-1bdd-4cf3-8ce5-98ca344a8eee" TYPE="swap"
/dev/md0p1: LABEL="Data" UUID="fb3523d0-72e7-4f5e-b8e0-fd448d5c875f" TYPE="ext4"
Code:
root@HTPC:/home/media# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 sdb1[1] sda1[3]
3907028824 blocks super 1.2 level 5, 4k chunk, algorithm 2 [3/2] [_UU]
bitmap: 5/466 pages [20KB], 2048KB chunk
unused devices: <none>