  1. #1
    Join Date
    Apr 2006
    Posts
    931

    ZFS for VM storage in production environments

    Hi, I've seen for years that ZFS is used in FreeNAS, but I don't know anyone who has used it for virtualization in production environments. What is your experience with the ZFS on Linux (ZoL) port?

    I've been testing it with the aim of reusing some old servers without a RAID controller (Supermicro X8DTi, dual Xeon X5670, and 6 SSD drives in a ZFS RAID 10). I have 8GB of RAM allocated to ZFS and 16GB for the zvol. In my tests ZFS reached over 2GB/s on write tests with dd on 100GB files.

    I know that ZFS bases its operation on memory, but how does it get these results if I have limited the RAM and the zvol to a smaller size? Thank you.
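    One likely explanation for numbers far above what the disks can sustain: a plain dd returns as soon as the data is buffered in RAM, and ZFS batches async writes in memory before flushing them in transaction groups. A minimal sketch of an honest comparison (file path and sizes are placeholder examples, not from the thread):

```shell
# Sketch: compare dd with and without forcing data to stable storage.
# TESTFILE is a placeholder -- point it at the ZFS dataset under test.
TESTFILE=/tmp/zfs-dd-test.bin

# Buffered write: dd reports as soon as the data is in RAM, so the rate
# can far exceed what the vdevs actually sustain.
dd if=/dev/zero of="$TESTFILE" bs=1M count=64 2>&1 | tail -n1

# Honest write: conv=fdatasync makes dd flush before reporting, so the
# figure includes the cost of getting the data onto the disks.
dd if=/dev/zero of="$TESTFILE" bs=1M count=64 conv=fdatasync 2>&1 | tail -n1

rm -f "$TESTFILE"
```

    (Note that /dev/zero input compresses to almost nothing if the dataset has compression enabled, which also inflates results.)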

  2. #2
    Join Date
    Oct 2003
    Location
    The Netherlands
    Posts
    1,271
    Hi,

    This is something of great interest to me, but I've decided that ZoL just isn't fully there yet for production.
    As for your question: you can never have enough RAM for ZFS, but since you work exclusively on SSDs, I think the value of the ARC is a bit limited.
    I'm guessing you are getting the 2GB/s on the strength of those SSDs (which ones are you using, if you don't mind me asking?).

    One of the things you'll run into with ZoL is that it doesn't support TRIM (I believe only FreeBSD does at this point).

  3. #3
    Join Date
    Apr 2006
    Posts
    931
    The difficulty is knowing the recommended limit for a production environment without any performance penalty, so I ran the following tests:

    - Created a KVM VM (Proxmox) with unlimited I/O and executed a hard drive stress test (command: stress -d 60). Host iowait remained above 2%.

    - Performed a 100GB file write test on the host; the bandwidth ranged from 1.5GB/s down to 90MB/s, so there is a bottleneck somewhere.

    It's a shame not to use ZFS; it would be a good way to reuse hardware that still offers good performance, and these Supermicro motherboards support up to 192GB of memory.
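    A burst-then-stall pattern like 1.5GB/s down to 90MB/s is often the RAM buffer filling and then draining at transaction-group sync. A sketch of how to see where the writes actually land while the test runs ("tank" is a placeholder pool name):

```shell
# Watch per-vdev throughput while the dd test runs in another terminal.
# If dd reports GB/s while the vdevs sit near idle, you are measuring RAM;
# the periodic burst on the vdevs is the txg flush.
zpool iostat -v tank 1

# Cross-check against overall host device utilization and await times:
iostat -x 1
```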

  4. #4
    Quote Originally Posted by skywin View Post
    Hi, I've been watching for years that ZFS is used in FreeNAS but do not know anyone who has used for virtualization in production environments.
    What about using SmartOS or some other Solaris derivative? I mean, ZFS is not exactly a "fringe" technology, since big, high-performance production Oracle environments use it.

    Also, if you are using decent SSDs and not commodity consumer drives, modern garbage collection is excellent.
    My site dedicated to server and workstation hardware: http://www.servethehome.com

  5. #5
    Join Date
    Sep 2006
    Location
    Michigan
    Posts
    66
    Quote Originally Posted by pjkenned View Post
    What about using SmartOS or some other Solaris derivative? I mean, ZFS is not exactly a "fringe" technology, since big, high-performance production Oracle environments use it.

    Also, if you are using decent SSDs and not commodity consumer drives, modern garbage collection is excellent.
    SmartOS is a great platform. At MNX.io we've switched to SmartOS (from Onapp) running ZFS and are very pleased with the performance.
    MNX.io - 100% SSD Cloud Hosting, and Linux server management.

  6. #6
    Join Date
    Oct 2003
    Location
    The Netherlands
    Posts
    1,271
    Quote Originally Posted by nwilkens View Post
    SmartOS is a great platform. At MNX.io we've switched to SmartOS (from Onapp) running ZFS and are very pleased with the performance.
    I was thinking of trying this on one of my older machines. The list of tech looks impressive, but I'm not sure about the HW support.
    Would definitely love to get my VPS backed by ZFS. The snapshots and send/receive are really awesome tools.

    Quote Originally Posted by cyberhouse View Post
    Why not use it on an OS it's been used on for years, like FreeBSD?
    Not sure how far bhyve is, but I would miss the KVM hypervisor.
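    For anyone who hasn't used them, the snapshot and send/receive workflow mentioned above looks roughly like this (pool, dataset, snapshot, and host names are placeholders, not from the thread):

```shell
# Take a point-in-time snapshot of a dataset holding VM disks:
zfs snapshot tank/vm-disks@nightly-2015-06-01

# Full copy of that snapshot to another pool on the same box:
zfs send tank/vm-disks@nightly-2015-06-01 | zfs receive backup/vm-disks

# Incremental replication (only blocks changed since the previous
# snapshot) to another machine over SSH:
zfs send -i tank/vm-disks@nightly-2015-05-31 tank/vm-disks@nightly-2015-06-01 \
  | ssh backuphost zfs receive -F backup/vm-disks
```

    The incremental form is what makes it attractive for VPS backups: after the first full send, each run only ships the delta.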

  7. #7
    Join Date
    Feb 2002
    Location
    New York, NY
    Posts
    4,618
    Quote Originally Posted by barry[CoffeeSprout] View Post
    Would definitely love to get my VPS backed by ZFS. The snapshots and send / receive are really awesome tools
    My ZFS experience is with FreeBSD (and earlier on Solaris), but I've talked to several ZoL users, and they seem to think that it's quite usable now. If you want to do a more advanced virtualization setup, you could use Xen with a FreeBSD-based storage domain. FreeBSD has a Xen blkback driver, so it can export devices back to dom0 or other VMs. If your server has an IOMMU, then you can even give FreeBSD direct access to your HBA.


    Quote Originally Posted by barry[CoffeeSprout] View Post
    Not sure how far Behyve is, but I would miss the KVM hypervisor
    bhyve has come a long way. If you're familiar with KVM, then you shouldn't have much trouble using bhyve.

    https://wiki.freebsd.org/bhyve
    Scott Burns, President
    BQ Internet Corporation
    Remote Rsync and FTP backup solutions
    *** http://www.bqbackup.com/ ***

  8. #8
    Join Date
    Sep 2004
    Location
    Cluj-Napoca, Romania
    Posts
    504
    ZFS on Linux used to have a nasty bug where it would eat all the RAM and then some. It was fixed two (?) versions ago.

    We started using ZFS on Linux with KVM and OpenStack 2-3 months ago and it's all good.

    We know another provider that has used it for the last 12 to 18 months, and they told us about all the bumps.

    We also used Solaris and FreeBSD before.

    We still have a backup server with FreeBSD where we back up cPanel accounts over iSCSI and benefit from snapshots, compression, and deduplication. There are many problems with this server, but they might be hardware-related.

  9. #9
    Join Date
    Mar 2012
    Posts
    1,425
    I like this thread. Nice, finally something of interest.
    --

  10. #10
    Join Date
    Mar 2003
    Location
    chicago
    Posts
    1,790
    Why not use it on an OS it's been used on for years, like FreeBSD?

  11. #11
    Join Date
    Oct 2003
    Location
    The Netherlands
    Posts
    1,271
    Thanks Scott, valuable feedback.

    There was a new release of ZoL, but feedback has been a bit mixed. I contacted somebody who did various tests of KVM performance on ZFS, BTRFS, EXT4, etc. (read his helpful article here: http://www.ilsistemista.net/index.ph...omparison.html )
    He did mention that he would be reluctant to use ZoL in production for now, but then again, I would be reluctant to offer a stranger on the internet any advice as well.

    I didn't know Xen could do that; that's interesting!

    Also, bhyve does look interesting. I'm guessing it will see a lot of love with release 11 as well.

  12. #12
    Join Date
    Apr 2006
    Posts
    931
    Barry, a very interesting comparison, but it is not very useful to evaluate ZFS without a good amount of RAM allocated or an SSD cache for L2ARC, because its main advantages over other file systems are eliminated.

  13. #13
    Join Date
    Oct 2003
    Location
    The Netherlands
    Posts
    1,271
    Well, that would be one of the advantages of ZFS over the others. Still, it's good to compare on the exact same HW, but I agree that I would include more memory and an L2ARC for doing VPS, since that's where a lot of your extra performance will be coming from. Not sure if this still applies, but one of the issues with ZoL is that the ARC and the Linux disk cache are separate, meaning you'll have some duplication.
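    Because of that duplication, a common ZoL practice is to cap the ARC via the zfs_arc_max module parameter so it leaves room for the page cache and guest RAM. A minimal sketch; the 4 GiB figure is an arbitrary example, not a recommendation from this thread:

```shell
# Cap the ARC at 4 GiB (value is in bytes; 4 GiB chosen purely as an example).
ARC_MAX=$((4 * 1024 * 1024 * 1024))
echo "options zfs zfs_arc_max=$ARC_MAX"

# To persist it (requires root), write that line to a modprobe config,
# e.g.:  echo "options zfs zfs_arc_max=$ARC_MAX" | sudo tee /etc/modprobe.d/zfs.conf
# Current ARC size and hit rates can be read from /proc/spl/kstat/zfs/arcstats.
```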

  14. #14
    We run 40+ production VMs on FreeNAS with zero issues. All VMs are on ESXi via iSCSI, over 6+ months without a single hiccup. The FreeNAS GUI just makes administration much easier, I think. Give it plenty of RAM and ZFS will perform excellently, and you can always tweak with an L2ARC or ZIL if needed.

  15. #15
    Join Date
    Apr 2004
    Location
    Singapore
    Posts
    1,234
    We run production VMs (VMware and KVM) connected to FreeBSD/ZFS storage via NFS; it has worked fine for years.

    We only use the compression and snapshot features, no deduplication.

    For best performance:
    1 SSD for read cache (can be any SSD with a large capacity)
    1 SSD for ZIL/SLOG (as a write journal); we use an Intel DC S3700 SSD. Use the best-performing SSD you can get, with a supercapacitor for data protection.

    The data disks can be SATA or SSD, up to you.
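    For reference, attaching the two SSDs in that layout is a one-liner each; pool and device names below are placeholders, not from this setup:

```shell
# Read cache (L2ARC) on the large SSD:
zpool add tank cache /dev/disk/by-id/ata-LARGE_SSD

# Separate intent log (SLOG) on the supercap-protected SSD --
# this is what absorbs synchronous writes from NFS/iSCSI clients:
zpool add tank log /dev/disk/by-id/ata-INTEL_DC_S3700

# Verify the layout and device roles:
zpool status tank
```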

    Alan
    Alan Woo, alan [@] ne.com.sg
    = NewMedia Express Pte Ltd (AS38001)
    = IP Transit, Colocation & Dedicated Servers in Singapore | Hong Kong | Tokyo | Seoul | Jakarta |
    = Singapore Speedtest speedtest.sg

  16. #16
    Join Date
    Oct 2006
    Location
    Colorado Springs, CO
    Posts
    78
    Similar to others, we have a Xen cluster with shared storage from a FreeNAS unit. It's a 24-bay SuperMicro affair and we built it like so:

    20x 2TB SATA 7200RPM drives (Seagate Constellations, if I recall)
    2x Intel DC S3700 100GB SSDs (supercaps) for ZIL (redundant)
    2x Samsung 840 512GB SSDs for L2ARC
    An LSI 2308-based SATA controller, a 10GbE card, misc fixins

    We got a lot of our know-how from zfsbuild.com. This SAN has been particularly fast and easy to maintain. The only hiccup we had was that every 35 days, on the following Sunday morning, the automatic ZFS scrub would take place and randomly stall I/O to some Linux VMs, taking them offline. We now do those scrubs manually every 60 days during scheduled maintenance.

    The downside is that it's an expensive piece of kit to build, and its performance does not distribute/scale super well, so this beast is probably the last of its kind for us.
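    Moving scrubs to a manual cadence like that is straightforward; a sketch, with pool name and schedule as placeholders (and assuming the appliance's built-in scrub schedule has been disabled in its GUI):

```shell
# Kick off a scrub by hand during a maintenance window:
zpool scrub tank

# Check progress and any repaired errors (also shows when it finishes):
zpool status tank

# Or automate roughly every 60 days via cron -- e.g. 3am on the 1st of
# every other month:
#   0 3 1 */2 * /sbin/zpool scrub tank
```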
    Randal Kohutek | Ops Guy
    Data102 - Colorado's Public Datacenter
    Virtual Private Servers | Colocation | VoIP Hosted Handsets | Email Firewalls | And More!
    www.Data102.com | (888)-Data102 | Call us Today!

  17. #17
    Join Date
    Jul 2003
    Location
    Waterloo, Ontario
    Posts
    1,135
    This thread came at the perfect time. I've been looking into setting up a ZFS NAS for a while now and finally freed up some time to get into it again.

    I've been looking for some JBOD SuperMicro chassis, if anybody has a recommendation for something low-cost. I have some 800GB Intel S3700s that I want to use for ZIL/L2ARC and some 16-core AMD chips that I can throw in there to test out compression and/or deduplication.

    If anybody can make some suggestions, I'd be happy to collaborate!
    Right Servers Inc. - Fully Managed VPS | Fully Managed Bare Metal Servers. Hosting for entrepreneurs, by entrepreneurs. Features you can't ignore: High Availability| VMWare| NVMe| DirectAdmin & Softaculous Premium| BitNinja A Security Suite| Daily Backups| 24/7 Monitoring| Software Installation, Configuration & troubleshooting| Website Migrations | 30 Day Money Back Guarantee

  18. #18
    Join Date
    Apr 2006
    Posts
    931
    Hello, in your experience, is this ZFS setup a good idea on a SATA 3Gb interface?

    4 x 3TB in RAID 10
    2 x 256GB SSD in RAID 1 (for L2ARC and ZIL)

  19. #19
    Join Date
    Apr 2004
    Location
    Singapore
    Posts
    1,234
    Ideally the SSDs should connect to 6Gbps SATA, as some SSDs can easily hit 500MB/s, but 3Gbps is fine unless you want very high sequential throughput.
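    The back-of-envelope arithmetic behind that: SATA uses 8b/10b encoding, so every data byte costs 10 bits on the wire, which puts the usable ceiling of a 3Gbps link right around what a single fast SSD can deliver:

```shell
# SATA line rate -> usable payload throughput (8b/10b: 10 wire bits per byte).
sata3g=$((3000000000 / 10 / 1000000))   # MB/s
sata6g=$((6000000000 / 10 / 1000000))
echo "SATA 3Gbps ~ ${sata3g} MB/s, SATA 6Gbps ~ ${sata6g} MB/s"
# -> SATA 3Gbps ~ 300 MB/s, SATA 6Gbps ~ 600 MB/s
```

    So a ~500MB/s SSD saturates a 3Gbps port at roughly 300MB/s but fits comfortably in 6Gbps.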
    Alan Woo, alan [@] ne.com.sg
    = NewMedia Express Pte Ltd (AS38001)
    = IP Transit, Colocation & Dedicated Servers in Singapore | Hong Kong | Tokyo | Seoul | Jakarta |
    = Singapore Speedtest speedtest.sg

  20. #20
    Join Date
    Apr 2006
    Posts
    931
    What do you think of using a single MLC PCIe SSD for both L2ARC and ZIL?

  21. #21
    Join Date
    Jan 2003
    Location
    Budapest, Hungary
    Posts
    236
    Don't use a single SSD for L2ARC and ZIL together; it might become a bottleneck. Use one device per function.
    The L2ARC is the priority on space, so give it a lot. The ZIL does not use much, so around 8GB will be enough.
    ServerAstra.com website / e-mail: info @ serverastra.com
    HU/EU Co-Location / Managed and Unmanaged Cloud & Dedicated servers in Hungary with unmetered connections

  22. #22
    Join Date
    Apr 2006
    Posts
    931
    I will use a new Intel 750 PCIe SSD in my setup; 430K IOPS read and 230K write should be enough.

