Getit-Berlin

IT-Solutions for Special Requirements

Category: Server (Page 1 of 2)

zfs-auto-snapshot vs sanoid: which one is better? : zfs

Source: zfs-auto-snapshot vs sanoid: which one is better? : zfs

Yet another interesting question in connection with sanoid.

It is probably time to finally take a closer look at it.
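For context, sanoid's big difference from zfs-auto-snapshot is that it is policy-driven: retention is declared centrally in /etc/sanoid/sanoid.conf. A minimal sketch of such a policy (the dataset name tank/data is a made-up example, not from this post):

```
[tank/data]
        use_template = production

[template_production]
        hourly = 36
        daily = 30
        monthly = 3
        autosnap = yes
        autoprune = yes
```

A cron job then runs `sanoid --cron`, which takes and prunes snapshots according to this policy.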

What is DogeOS?

http://www.dogeos.net/

Given the uncertain future of OmniOS, DogeOS may be an option worth testing.

DogeOS is a distribution based on SmartOS and the Project FiFo. It is made to be the ultimate cloud OS for the data center.

  • All industry proven features of SmartOS: ZFS, Dtrace, KVM, Zones and Crossbow.
  • Ready-to-use management console from FIFO.
  • Nearly 100% resource utilization of hardware.
  • No installation time for Resource Node (a.k.a chunter node).
  • Guided, fast (< 10min) provision of the first FiFo (management) zone, and works even without Internet access.

DogeOS is, like Project FiFo and SmartOS, licensed under the CDDL. It is free to use.

[OmniOS-discuss] The Future of OmniOS

Source: [OmniOS-discuss] The Future of OmniOS

What will the future of OmniOS bring? Obviously things are getting critical for OmniOS. NexentaStor and many other companies build on illumos, so ZFS storage solutions will carry on.

Should one perhaps move to SmartOS or the Nexenta Community edition? Or even the original, pure Solaris? – stay tuned 😉


What is Syncoid and Openoid?

Policy-driven snapshot management and replication tools. Currently using ZFS for underlying next-gen storage, with explicit plans to support btrfs when btrfs becomes more reliable. Primarily intended for Linux, but BSD use is supported and reasonably frequently tested. http://www.openoid.net/products/

Syncoid works with OmniOS and could be an alternative to napp-it autosync jobs and rsync.

I prefer napp-it driven ZFS replication and backup jobs, but syncoid seems worth a test.
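A minimal syncoid invocation might look like the following; the hostname and dataset names are made-up examples, not from this post:

```shell
# Replicate a local dataset to a remote pool over SSH.
# syncoid creates and prunes its own sync snapshots automatically,
# sending an incremental stream when a common snapshot already exists.
syncoid zroot/data root@backuphost:backuppool/data
```

Run from cron, this gives continuously updated replicas much like a napp-it autosync job.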


How long does a ZFS storage system run?

How long does a ZFS storage system run? Here is a live system:

user@hostname> uname -a
SunOS hostname 5.8 Generic_108528-07 sun4u sparc SUNW,Ultra-2

user@hostname> uptime
  1:10pm  up 5241 day(s),  3:54,  1 user,  load average: 0.01, 0.01, 0.01

user@hostname> psrinfo -v
Status of processor 0 as of: 09/16/16 13:11:49
  Processor has been on-line since 05/12/02 09:15:42.
  The sparcv9 processor operates at 200 MHz,
        and has a sparcv9 floating point processor.
Status of processor 1 as of: 09/16/16 13:11:49
  Processor has been on-line since 05/12/02 09:15:47.
  The sparcv9 processor operates at 200 MHz,
        and has a sparcv9 floating point processor.

user@hostname> prtconf -vp | grep Mem
Memory size: 512 Megabytes

Thanks to reddit: https://www.reddit.com/r/zfs/

Deleting with ZFS

Why are deletions slow?

  • Deleting a file requires several steps. The file metadata must be marked as 'deleted', and eventually the space must be reclaimed so it can be reused. ZFS is a 'log-structured filesystem' which performs best if you only ever create things and never delete them. The log structure means that if you delete something, there is a gap in the log, so other data must be rearranged (defragmented) to fill the gap. This is invisible to the user but generally slow.
  • The changes must be made in such a way that if power were to fail partway through, the filesystem remains consistent. Often, this means waiting until the disk confirms that data really is on the media; for an SSD, that can take a long time (hundreds of milliseconds). The net effect of this is that there is a lot more bookkeeping (i.e. disk I/O operations).
  • All of the changes are small. Instead of reading, writing and erasing whole flash blocks (or cylinders for a magnetic disk) you need to modify a little bit of one. To do this, the hardware must read in a whole block or cylinder, modify it in memory, then write it out to the media again. This takes a long time.

What can be done?

Instead of removing millions of files one by one, put them in a dedicated dataset; destroying the whole dataset is far faster than deleting its contents:

zfs create -o compression=on -o exec=on -o setuid=off zroot/tmp

chmod 1777 /zroot/tmp

zfs set mountpoint=/tmp zroot/tmp

Copy the files to /zroot/tmp; when they are no longer needed:

zfs destroy zroot/tmp

source: http://serverfault.com/questions/801074/delete-10m-files-from-zfs-effectively

ZFS send and receive with nc

ZFS send & receive is a great feature and a good solution for remote backup. How do you receive and then(!) send ZFS snapshots?

Here is my code snippet:

root@local:~# nc remote.dyndns.org 22553 | zfs receive -vd vol1
root@remoteserver:~# zfs send vol2/services/datastore_l@1 | nc -l -p 22553

This works fine with OmniOS and OpenIndiana.
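If the hosts can reach each other over SSH, the same transfer can be done encrypted in a single command instead of an unencrypted nc socket (reusing the hostnames and dataset names from the snippet above):

```shell
# Run on the sending host: pipe the snapshot stream through SSH
# straight into zfs receive on the destination.
zfs send vol2/services/datastore_l@1 | ssh root@local.dyndns.org zfs receive -vd vol1
```

nc is faster on trusted networks because it skips encryption; SSH is the safer choice across the Internet.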

OmniOS r151018 SMB issue

If you use SMB with OmniOS in production, you may run into the following issue.

When saving to an OmniOS SMB share with "Save as", the operation freezes. This affects files of any size.

Between OmniOS versions r151016 and r151018, a package introduced this problem. So either downgrade, or work around it by disabling oplocks:

svccfg -s network/smb/server setprop smbd/oplock_enable=false

oplock stands for Opportunistic Locking and refers to the locking of file access when several clients access the same file.

Oplocks could also be changed on the Windows side (in the registry), but that makes less sense.
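In my experience with SMF, a property change like this usually only takes effect after the service is restarted; a hedged sketch of the remaining steps (verify on your own release):

```shell
# Restart the SMB service so the changed SMF property takes effect
svcadm restart network/smb/server

# Verify the current value of the property
svcprop -p smbd/oplock_enable network/smb/server
```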

napp-it supports HA Cluster RSF-1

RSF-1, a provider of ZFS HA (well known to me and field-proven from NexentaStor HA clusters), supports OmniOS and other ZFS derivatives.

HA-Napp-IT

Now napp-it can manage the RFS-1 High Availability-Cluster Plugin.

Official announcement on the napp-it website:

napp-it 16.08 pro edition
 Appliance security supports RSF-1 ports 1195 and 8020
 Comstar: create and delete raw LU
 new main menu HA Cluster with RSF-1 cluster settings
 napp-it Pro: Support for RSF-1 Cluster

Link to napp-it changelog

Great article about IOPS by Nexenta

The Nexenta blog has published a great article about IOPS. The key question is: what can I expect from my storage system? The article also compares 128K and 4K block sizes and shows how the block size affects IOPS.

Three Dimensions of Storage Sizing & Design – Part 3: Speed
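The block-size trade-off is easy to see with a back-of-the-envelope calculation: throughput ≈ IOPS × block size. The IOPS figure below is an assumed example, not taken from the article:

```shell
# Same IOPS budget, very different throughput depending on block size
IOPS=5000
echo "4K blocks:   $((IOPS * 4 / 1024)) MB/s"    # small blocks, low throughput
echo "128K blocks: $((IOPS * 128 / 1024)) MB/s"  # large blocks, high throughput
```

Conversely, at a fixed throughput limit, small blocks consume many more IOPS, which is why random 4K workloads are the hard case for sizing.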
