
Re: Happiness is...



Thanks for the info. I noticed in a previous email you said, "It'd be a lot better if/when Red Hat integrates it into the mainstream." Do you think this will eventually happen, or do you think Red Hat wants nothing to do with supporting ZFS? I've considered using ZFS on my NAS at home (currently running Scientific Linux 6.4), but I don't really want to rebuild it, I only have 4 GB of memory in it, and from what I understand you need lots of RAM to get the full benefit of ZFS. Right now I have the OS on one disk, the data on four 320 GB disks in a RAID 0, and a USB 2.0 1 TB external hard drive attached that stores the backups rsync handles for me. All my partitions are formatted as ext4.
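
The backup job itself is basically a one-liner along these lines (the
mount points here are illustrative, not my real paths):

# rsync -aH --delete /data/ /mnt/usb-backup/data/

where -a preserves permissions, ownership, and timestamps, -H preserves
hard links, and --delete prunes files from the backup that no longer
exist on the array.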

Kevin


On Mon, Sep 16, 2013 at 5:39 AM, Robert G. (Doc) Savage <dsavage@peaknet.net> wrote:
Kevin,

To give you a better idea of what I mean by "very fast", I'm restoring
from a 4 TB SATA3 drive connected to an external SATA2 port (3 Gbit/sec).
Large image files (>10 GB) stream at a rate of 27 MB/sec. Since 1pm
yesterday rsync has restored some 1,240,000 files totaling 1.22 TB so
far.
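
Back-of-the-envelope, that's an average of roughly 20 MB/sec sustained,
assuming about 16.5 hours elapsed between 1pm yesterday and now
(1.22 TB expressed as 1,220,000 MB, divided by the elapsed seconds):

# echo "scale=1; 1220000 / (16.5 * 3600)" | bc
20.5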

--Doc

On Sun, 2013-09-15 at 18:59 -0500, Robert G. (Doc) Savage wrote:
> Kevin,
>
> It's very fast, even for writes. It'd be a lot better if/when Red Hat
> integrates it into the mainstream. Allowing the hashed block table to
> reside on an SSD rather than in ungodly-sized motherboard RAM would make
> deduplication accessible and affordable to everyone.
>
> --Doc
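
(For what it's worth, the closest approximation available today is an
L2ARC cache device on an SSD: dedup-table entries are metadata and can
be cached there, though the table has no true on-SSD home yet. A
minimal sketch, using the pool name pub from the messages below and a
hypothetical SSD at /dev/sdX:

# zpool add pub cache /dev/sdX
# zfs set dedup=on pub

The L2ARC headers themselves still live in RAM, so this eases the
memory pressure rather than eliminating it.)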
>
> On Sun, 2013-09-15 at 14:42 -0500, Kevin Thomas wrote:
> > Do you feel that running ZFS on Linux has been worth the trouble?
> >
> >
> > Kevin
> >
> > On Sunday, September 15, 2013, Robert G. (Doc) Savage wrote:
> >         ... having a freshly made full backup of a ZFS array when it
> >         faults out and must be rebuilt from scratch. I'd saved the two
> >         zpool and zfs create command lines as scripts (sketched below).
> >         All I needed to do to use them was zeroize all nine drives
> >         (several hours) and then run:
> >
> >         # zpool destroy pub
> >
> >         I'm now rsyncing 1,250,000 files back to the newly created
> >         array. That'll probably take a day or two.
> >
> >         --Doc
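
The two saved scripts aren't shown here, but their general shape would
be something like the following; the raidz2 layout, device names, and
dataset name are guesses, not Doc's actual configuration (only the pool
name pub is real):

# zpool create pub raidz2 sdb sdc sdd sde sdf sdg sdh sdi sdj
# zfs create pub/archive

With both command lines saved, a from-scratch rebuild reduces to
zeroizing the drives, destroying the old pool, and rerunning the two
scripts.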



-
To unsubscribe, send email to majordomo@silug.org with
"unsubscribe silug-discuss" in the body.