source: Planet-Libre

Morot: Puppet: automating the build of a replicated GlusterFS volume

Thursday, September 14, 2017, at 22:45


I will quickly show how to write the Puppet code needed to build a two-node cluster with two replicated bricks. For the demonstration, I will use two Ubuntu 16.04 VMs, with the bricks on two 8 GB hard disks.

Preparing the Puppet Master

Install the required modules:

# puppet module install puppetlabs-lvm
Notice: Preparing to install into /etc/puppetlabs/code/environments/production/modules ...
Notice: Downloading from https://forgeapi.puppet.com ...
Notice: Installing -- do not interrupt ...
/etc/puppetlabs/code/environments/production/modules
└─┬ puppetlabs-lvm (v0.9.0)
  └── puppetlabs-stdlib (v4.20.0)

# puppet module install puppet-gluster
Notice: Preparing to install into /etc/puppetlabs/code/environments/production/modules ...
Notice: Downloading from https://forgeapi.puppet.com ...
Notice: Installing -- do not interrupt ...
/etc/puppetlabs/code/environments/production/modules
└─┬ puppet-gluster (v3.0.0)
  └─┬ puppetlabs-apt (v2.4.0)
    └── puppetlabs-stdlib (v4.20.0)

The module tree will live in this directory:

# mkdir -p /etc/puppetlabs/code/environments/production/modules/glustersrv/manifests

Preparing the disks

File /etc/puppetlabs/code/environments/production/modules/glustersrv/manifests/lvm.pp (note that the class must be named glustersrv::lvm to match the file name and the include statements in site.pp below):

class glustersrv::lvm {

  # Dependencies:
  package { 'xfsprogs': ensure => present }
  package { 'lvm2': ensure => present }

  # Create the LVM stack:
  physical_volume { '/dev/sdb':
    ensure => present,
  }

  volume_group { 'vg-gluster':
    ensure           => present,
    physical_volumes => '/dev/sdb',
  }

  logical_volume { 'lv-bricks':
    ensure       => present,
    volume_group => 'vg-gluster',
    size         => '7.9G',
  }

  filesystem { '/dev/vg-gluster/lv-bricks':
    ensure  => present,
    fs_type => 'xfs',
  }

  # The mount point must exist before the volume can be mounted:
  file { ['/data', '/data/glusterfs', '/data/glusterfs/vol0']:
    ensure => directory,
  }

  # Mount the LVM volume automatically:
  mount { '/data/glusterfs/vol0':
    ensure  => mounted,
    atboot  => true,
    device  => '/dev/vg-gluster/lv-bricks',
    fstype  => 'xfs',
    options => 'defaults',
    dump    => 1,
    pass    => 0,
    require => [Filesystem['/dev/vg-gluster/lv-bricks'], File['/data/glusterfs/vol0']],
  }
}
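
The puppetlabs-lvm types autorequire most of their own dependencies, but the ordering this class relies on can also be made explicit with resource chaining. This is an optional sketch, not part of the original post:

Package['lvm2']
-> Physical_volume['/dev/sdb']
-> Volume_group['vg-gluster']
-> Logical_volume['lv-bricks']
-> Filesystem['/dev/vg-gluster/lv-bricks']
-> Mount['/data/glusterfs/vol0']

With the chain in place, a failure anywhere in the stack (for example, a missing /dev/sdb) stops the run before Puppet attempts the mount.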

Creating the replicated volume

File /etc/puppetlabs/code/environments/production/modules/glustersrv/manifests/node.pp:

class glustersrv::node {

  file { '/data/glusterfs/vol0/brick0':
    ensure  => directory,
    require => Mount['/data/glusterfs/vol0'],
  }

  package { 'glusterfs-server': ensure => present }

  service { 'glusterfs-server':
    ensure     => running,
    enable     => true,
    hasrestart => true,
    require    => Package['glusterfs-server'],
  }

  gluster::volume { 'repl-vol':
    replica => 2,
    bricks  => [
      'gluster0.morot.test:/data/glusterfs/vol0/brick0',
      'gluster1.morot.test:/data/glusterfs/vol0/brick0',
    ],
    require => Service['glusterfs-server'],
  }
}
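
If you prefer a single entry point per node, a hypothetical init.pp (not part of the original post) could tie the two classes together and enforce their order:

class glustersrv {
  contain glustersrv::lvm
  contain glustersrv::node

  # Bricks must be formatted and mounted before GlusterFS uses them:
  Class['glustersrv::lvm'] -> Class['glustersrv::node']
}

The nodes would then simply "include glustersrv" instead of listing both sub-classes.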

Assigning the classes

File /etc/puppetlabs/code/environments/production/manifests/site.pp:

node 'gluster0' {
  include system
  include glustersrv::lvm
  gluster::peer { 'gluster1.morot.test':
    pool => 'production',
  }
  include glustersrv::node
}

node 'gluster1' {
  include system
  include glustersrv::lvm
  gluster::peer { 'gluster0.morot.test':
    pool => 'production',
  }
  include glustersrv::node
}
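
To consume the volume, a client node could mount it over the native GlusterFS protocol. A minimal sketch, assuming Ubuntu 16.04 package names and a hypothetical /mnt/repl-vol mount point (none of this is in the original setup):

package { 'glusterfs-client': ensure => present }

file { '/mnt/repl-vol': ensure => directory }

mount { '/mnt/repl-vol':
  ensure  => mounted,
  device  => 'gluster0.morot.test:/repl-vol',
  fstype  => 'glusterfs',
  options => 'defaults,_netdev,backupvolfile-server=gluster1.morot.test',
  require => [Package['glusterfs-client'], File['/mnt/repl-vol']],
}

The backupvolfile-server option lets the client fall back to the second node if gluster0 is down at mount time.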

Checking the result

root@gluster1:~# gluster volume status
Status of volume: repl-vol
Gluster process                                            TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick gluster0.morot.test:/data/glusterfs/vol0/brick0      49152     0          Y       7037
Brick gluster1.morot.test:/data/glusterfs/vol0/brick0      49152     0          Y       3817
NFS Server on localhost                                    N/A       N/A        N       N/A
Self-heal Daemon on localhost                              N/A       N/A        Y       3844
NFS Server on 192.168.69.70                                N/A       N/A        N       N/A
Self-heal Daemon on 192.168.69.70                          N/A       N/A        Y       7063

Task Status of Volume repl-vol
------------------------------------------------------------------------------
There are no active volume tasks

Original post by Morot on Planet Libre.