>_Cluster CentOS 7.2 – Docker Network Multi-Host – 1.6

In Docker's early versions, the network layer was managed by LXC (Linux Containers). As of Docker Engine 1.9, the networking layer was announced as part of the engine itself. Multi-host networking was also implemented, using a VXLAN-based solution built with libnetwork and the libkv library. The overlay network therefore relies on a key/value store service to exchange information between the different Docker Engine hosts. Versions 1.9 and 1.10 brought a built-in overlay network driver based on VXLAN, implemented through the libnetwork library, to support a wide range of virtual networks spanning multiple hosts across different clouds.

Docker Engine lets users define their own networks and connect containers to them. Using this feature, you can create a network on a single host or a network that spans several hosts. If you are already familiar with the default bridge networks, you will not have much difficulty administering and understanding this type of network in Docker Engine.

Etcd – Etcd is a tool that provides shared configuration and service discovery, allowing Docker clusters to share configuration data.

Main characteristics of etcd (a quick example of its HTTP API follows the list below):

Simple: simple method of exchanging information through the API (HTTP+JSON).
Secure: secure method; clients exchange SSL keys/certificates to authenticate.
Fast: fast read and write operations.
Reliable: reliable, properly distributed using the Raft consensus protocol.
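
As a quick illustration of the "Simple" point above, once etcd is running (Step 12 onwards) its HTTP+JSON API can be exercised directly with curl; the key name "teste" below is only an example and is not used anywhere else in this tutorial:

# curl -L -X PUT http://127.0.0.1:2379/v2/keys/teste -d value="docker"
# curl -L http://127.0.0.1:2379/v2/keys/teste

Docker Engine stores its multi-host network metadata in this same key/value space when --cluster-store points at etcd.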

Virtual Extensible LAN (VXLAN) – A network virtualization technology that tries to address the scalability problems associated with large cloud computing deployments. It uses a VLAN-like encapsulation technique to wrap MAC-based, OSI layer 2 Ethernet frames inside layer 4 UDP packets. Open vSwitch is one of the early implementations of VXLAN. Docker Engine implements a built-in VXLAN driver through the libnetwork library.
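
Just to make the encapsulation idea more concrete, a VXLAN interface can be created by hand with iproute2. This is only a throwaway sketch (it assumes a NIC named eth0 and is not part of the cluster setup); it simply shows the kind of device the overlay driver creates automatically for each overlay network:

# ip link add vxlan100 type vxlan id 100 dev eth0 dstport 4789
# ip -d link show vxlan100
# ip link del vxlan100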

Overlay – The driver that manages network bridges, turning them into networks exposed across a group of hosts.

Leaving the theory aside for a bit, let's move on to the more interesting part of this tutorial.

For those following the Cluster CentOS for Docker tutorials, this is tutorial 1.6. Both tutorial 1.5 and tutorial 1.6 deal with the same concept, which is grouping Docker hosts. Tutorial 1.5 finished with the installation of Docker Engine and Weave Network. The configuration of a network with Weave Network will be resumed in tutorial 1.7.

Note: If you are not following the Cluster CentOS for Docker series, just follow this tutorial from the first steps. If you are already following the series, please skip steps 1, 2 and 3.

Step 1 – Updating and installing the required packages on the nodes (ba-vm-www1 and ba-vm-www2).

# yum update -y
# yum install ntp wget -y 
# systemctl enable ntpd
# systemctl start ntpd

Step 2 – Configuring the official Docker Engine repository on both nodes (ba-vm-www1 and ba-vm-www2).

# cd /etc/yum.repos.d/
# vim docker.repo
[dockerrepo]
name=Docker Repository
baseurl=https://yum.dockerproject.org/repo/main/centos/$releasever/
enabled=1
gpgcheck=1
gpgkey=https://yum.dockerproject.org/gpg

Step 3 – Installing Docker Engine.

#  yum install docker-engine -y

Step 4 – Installing etcd.

# cd /opt
# wget  https://github.com/coreos/etcd/releases/download/v2.2.1/etcd-v2.2.1-linux-amd64.tar.gz
# tar -xvf etcd-v2.2.1-linux-amd64.tar.gz

Step 5 – Entering the etcd directory.
Note: This directory contains the binaries and scripts for creating/administering both the API and the nodes that will be interconnected.

# cd etcd-v2.2.1-linux-amd64

Creating a symbolic link from the etcd binary to the /usr/bin/ directory.

ba-vm-www1.

[root@ba-vm-www1 etcd-v2.2.1-linux-amd64]# ln -s /opt/etcd-v2.2.1-linux-amd64/etcd /usr/bin/

ba-vm-www2.

[root@ba-vm-www2 etcd-v2.2.1-linux-amd64]# ln -s /opt/etcd-v2.2.1-linux-amd64/etcd /usr/bin/
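
To confirm that the symlink resolves through the PATH, a quick version check can be run on both nodes (an optional verification, not a required step):

# etcd --version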

Step 6 – Creating the connect script. This script joins the nodes (ba-vm-www1 and ba-vm-www2), forming a kind of keyed cluster for exchanging information through the API.
ba-vm-www1.

[root@ba-vm-www1 etcd-v2.2.1-linux-amd64]# vim connect
#!/bin/bash

IP_HOST1=192.168.0.114
IP_HOST2=192.168.0.115

etcd -name node1 -initial-advertise-peer-urls http://$IP_HOST1:2380 \
  -listen-peer-urls http://0.0.0.0:2380 \
  -listen-client-urls http://0.0.0.0:2379,http://127.0.0.1:4001 \
  -advertise-client-urls http://0.0.0.0:2379 \
  -initial-cluster-token etcd-cluster \
  -initial-cluster node1=http://$IP_HOST1:2380,node2=http://$IP_HOST2:2380 \
  -initial-cluster-state new

Step 7 – Making the connect script executable.

[root@ba-vm-www1 etcd-v2.2.1-linux-amd64]# chmod +x connect

ba-vm-www2

[root@ba-vm-www2 etcd-v2.2.1-linux-amd64]# vim connect
#!/bin/bash 

IP_HOST1=192.168.0.114
IP_HOST2=192.168.0.115

etcd -name node2 -initial-advertise-peer-urls http://$IP_HOST2:2380 \
  -listen-peer-urls http://0.0.0.0:2380 \
  -listen-client-urls http://0.0.0.0:2379,http://127.0.0.1:4001 \
  -advertise-client-urls http://0.0.0.0:2379 \
  -initial-cluster-token etcd-cluster \
  -initial-cluster node1=http://$IP_HOST1:2380,node2=http://$IP_HOST2:2380 \
  -initial-cluster-state new
[root@ba-vm-www2 etcd-v2.2.1-linux-amd64]# chmod +x connect

Step 8 – Creating the etcd.service file. This unit will handle starting and stopping the etcd service.
Note: This file must be created on both nodes (ba-vm-www1 and ba-vm-www2).

# vim /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server

[Service]
Type=oneshot
ExecStart=/bin/bash /opt/etcd-v2.2.1-linux-amd64/etcd.sh start
ExecStop=/bin/bash /opt/etcd-v2.2.1-linux-amd64/etcd.sh stop
RemainAfterExit=yes


[Install]
WantedBy=multi-user.target

Step 9 – Creating the start/stop script for the etcd service.
Note: This script must be created on both nodes (ba-vm-www1 and ba-vm-www2).

# vim /opt/etcd-v2.2.1-linux-amd64/etcd.sh 
#!/bin/sh
# http://tutoriaisgnulinux.com

PIDFILE=/var/run/etcd.pid
ETCD=/opt/etcd-v2.2.1-linux-amd64

case "$1" in
    start)
        if [ -f $PIDFILE ]
        then
                echo "exists, process is already running or crashed"
        else
                echo "Starting etcd server..."
                sh $ETCD/connect &
                # Give etcd a moment to start before capturing its PID.
                sleep 1
                pid=$(ps aux | grep -i "etcd -name" | grep -v grep | awk '{print $2}')
                echo $pid > $PIDFILE
        fi
        ;;
    stop)
        if [ ! -f $PIDFILE ]
        then
                rm -rf $PIDFILE  >> /dev/null 2>&1
                echo "does not exist, process is not running" 
        else
                echo "Stopping etcd server..."
                pid=$(ps aux | grep -i "etcd -name" | grep -v grep | awk '{print $2}')
                rm -rf $PIDFILE  >> /dev/null 2>&1
                kill -9 $pid >> /dev/null 2>&1
        fi
        ;;
    *)
        echo "Please use start or stop as first argument"
        ;;
esac

Step 10 – Making the script executable.
Note: This change must be applied on both nodes (ba-vm-www1 and ba-vm-www2).

# chmod +x etcd.sh

Step 11 – Enabling the etcd.service unit to start at operating system boot.
ba-vm-www1.

[root@ba-vm-www1 ~]# systemctl enable etcd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.

ba-vm-www2.

[root@ba-vm-www2 ~]# systemctl enable etcd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.

Step 12 – Starting etcd.service on both nodes (ba-vm-www1 and ba-vm-www2).

[root@ba-vm-www1 ~]# systemctl start etcd
[root@ba-vm-www2 ~]# systemctl start etcd

Step 13 – To check the etcd log, just open the /var/log/messages file.

# tail -f /var/log/messages
Mar  2 18:44:01 ba-vm-www1 bash: 2016-03-02 18:44:01.883636 I | rafthttp: the connection with 65368524050cc2e8 became active
Mar  2 18:44:03 ba-vm-www1 bash: 2016-03-02 18:44:03.073522 I | raft: 3cdb12359b032091 is starting a new election at term 84
Mar  2 18:44:03 ba-vm-www1 bash: 2016-03-02 18:44:03.073612 I | raft: 3cdb12359b032091 became candidate at term 85
Mar  2 18:44:03 ba-vm-www1 bash: 2016-03-02 18:44:03.073639 I | raft: 3cdb12359b032091 received vote from 3cdb12359b032091 at term 85
Mar  2 18:44:03 ba-vm-www1 bash: 2016-03-02 18:44:03.073672 I | raft: 3cdb12359b032091 [logterm: 84, index: 267] sent vote request to 65368524050cc2e8 at term 85
Mar  2 18:44:03 ba-vm-www1 bash: 2016-03-02 18:44:03.087727 I | raft: 3cdb12359b032091 received vote from 65368524050cc2e8 at term 85
Mar  2 18:44:03 ba-vm-www1 bash: 2016-03-02 18:44:03.087780 I | raft: 3cdb12359b032091 [q:2] has received 2 votes and 0 vote rejections
Mar  2 18:44:03 ba-vm-www1 bash: 2016-03-02 18:44:03.087828 I | raft: 3cdb12359b032091 became leader at term 85
Mar  2 18:44:03 ba-vm-www1 bash: 2016-03-02 18:44:03.087873 I | raft: raft.node: 3cdb12359b032091 elected leader 3cdb12359b032091 at term 85
Mar  2 18:44:03 ba-vm-www1 bash: 2016-03-02 18:44:03.106647 I | etcdserver: published {Name:node1 ClientURLs:[http://0.0.0.0:2379]} to cluster 349b5ef00aa4ec77
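
Besides the log, the cluster state can also be checked with the etcdctl binary shipped in the same tarball (an optional verification, run from either node):

# /opt/etcd-v2.2.1-linux-amd64/etcdctl cluster-health
# /opt/etcd-v2.2.1-linux-amd64/etcdctl member list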

Note: Use the /var/log/messages file for troubleshooting.

Step 14 – Starting Docker Engine manually, passing the etcd parameters.
ba-vm-www1.

[root@ba-vm-www1 ~]# docker daemon -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --cluster-store=etcd://192.168.0.114:2379 --cluster-advertise=192.168.0.115:2375
WARN[0000] /!\ DON'T BIND ON ANY IP ADDRESS WITHOUT setting -tlsverify IF YOU DON'T KNOW WHAT YOU'RE DOING /!\ 
WARN[0000] devmapper: Usage of loopback devices is strongly discouraged for production use. Please use `--storage-opt dm.thinpooldev` or use `man docker` to refer to dm.thinpooldev section. 
WARN[0000] devmapper: Base device already exists and has filesystem xfs on it. User specified filesystem  will be ignored. 
INFO[0000] [graphdriver] using prior storage driver "devicemapper" 
INFO[0000] Graph migration to content-addressability took 0.00 seconds 
INFO[0000] Initializing discovery without TLS           
INFO[0000] Firewalld running: false                     
ERRO[0000] Multi-Host overlay networking requires cluster-advertise(192.168.0.115) to be configured with a local ip-address that is reachable within the cluster 
ERRO[0000] initializing serf instance failed: failed to create cluster node: Failed to start TCP listener. Err: listen tcp 192.168.0.115:7946: bind: cannot assign requested address 
INFO[0000] Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address 
INFO[0000] Loading containers: start.                   
....
INFO[0001] Loading containers: done.                    
INFO[0001] Daemon has completed initialization          
INFO[0001] Docker daemon                                 commit=c3959b1 execdriver=native-0.2 graphdriver=devicemapper version=1.10.2
INFO[0001] API listen on /var/run/docker.sock           
INFO[0001] API listen on [::]:2375                      

Note: The ERRO lines above appear because --cluster-advertise was set to 192.168.0.115, which is not a local address on ba-vm-www1; --cluster-advertise must reference an IP address that belongs to the node itself (192.168.0.114 in the case of ba-vm-www1), otherwise the multi-host overlay setup fails on that node.

ba-vm-www2.

[root@ba-vm-www2 ~]# docker daemon -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --cluster-store=etcd://192.168.0.114:2379 --cluster-advertise=192.168.0.115:2375
WARN[0000] /!\ DON'T BIND ON ANY IP ADDRESS WITHOUT setting -tlsverify IF YOU DON'T KNOW WHAT YOU'RE DOING /!\ 
WARN[0000] devmapper: Usage of loopback devices is strongly discouraged for production use. Please use `--storage-opt dm.thinpooldev` or use `man docker` to refer to dm.thinpooldev section. 
WARN[0000] devmapper: Base device already exists and has filesystem xfs on it. User specified filesystem  will be ignored. 
INFO[0000] [graphdriver] using prior storage driver "devicemapper" 
INFO[0000] Graph migration to content-addressability took 0.00 seconds 
INFO[0000] Initializing discovery without TLS           
INFO[0000] Firewalld running: false                     
INFO[0000] 2016/03/02 18:47:38 [INFO] serf: EventMemberJoin: ba-vm-www2 192.168.0.115
 
INFO[0000] Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address 
INFO[0000] Loading containers: start.                   
....
INFO[0000] Loading containers: done.                    
INFO[0000] Daemon has completed initialization          
INFO[0000] Docker daemon                                 commit=c3959b1 execdriver=native-0.2 graphdriver=devicemapper version=1.10.2
INFO[0000] API listen on /var/run/docker.sock           
INFO[0000] API listen on [::]:2375                 

Step 15 – Creating the first multi-host network, using the overlay driver (VXLAN).

[root@ba-vm-www1 ~]# docker network create -d overlay rede_ba_vm_www1
f0468c64f0256567812caf4b7f5c9c84287b8865babab05e7fda8b7f194b9376

Step 16 – Listing the networks on node ba-vm-www2.
Note: Notice that the network created on ba-vm-www1 also appears on ba-vm-www2.

[root@ba-vm-www2 ~]# docker network ls
NETWORK ID          NAME                DRIVER
f0468c64f025        rede_ba_vm_www1     overlay             
8ae93acec275        bridge              bridge              
97736563c247        weave               weavemesh           
07e4c93821d7        none                null                
d1d0c3ce4d81        host                host      
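
Optionally, docker network inspect can be used on either node to confirm that both hosts see the same definition (ID, driver and subnet) for the overlay network:

[root@ba-vm-www2 ~]# docker network inspect rede_ba_vm_www1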

Step 17 – Creating an overlay network on ba-vm-www2.

[root@ba-vm-www2 ~]# docker network create -d overlay rede_ba_vm_www2
6a885d9c9694c36ce796c11cb00fdcb2c92388c6046e074176d9069582e5ef17

Step 18 – Listing the networks on ba-vm-www1.

[root@ba-vm-www1 ~]# docker network ls
NETWORK ID          NAME                DRIVER
6a885d9c9694        rede_ba_vm_www2     overlay             
f0468c64f025        rede_ba_vm_www1     overlay             
8e6f7c5b7670        bridge              bridge              
c851ceb4a15d        weave               weavemesh           
b5696a710193        none                null                
cb311ac7512f        host                host         

Step 19 – Adjusting the docker.service file on node ba-vm-www1 so that the daemon always starts with the cluster options (remember that --cluster-advertise must point to the node's own IP address).

[root@ba-vm-www1 ~]# vim /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target docker.socket
Requires=docker.socket

[Service]
Type=notify
ExecStart=/usr/bin/docker daemon -H fd:// --cluster-store=etcd://192.168.0.114:2379 --cluster-advertise=192.168.0.114:2375
MountFlags=slave
LimitNOFILE=1048576
LimitNPROC=1048576
LimitCORE=infinity
TimeoutStartSec=0

[Install]
WantedBy=multi-user.target

Step 20 – Adjusting the docker.service file on node ba-vm-www2.

[root@ba-vm-www2 ~]# vim /usr/lib/systemd/system/docker.service 
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target docker.socket
Requires=docker.socket

[Service]
Type=notify
ExecStart=/usr/bin/docker daemon -H fd:// --cluster-store=etcd://192.168.0.115:2379 --cluster-advertise=192.168.0.115:2375
MountFlags=slave
LimitNOFILE=1048576
LimitNPROC=1048576
LimitCORE=infinity
TimeoutStartSec=0

[Install]
WantedBy=multi-user.target

Step 21 – Restarting Docker Engine on node ba-vm-www1.

[root@ba-vm-www1 ~]# systemctl daemon-reload
[root@ba-vm-www1 ~]# systemctl restart docker

Step 22 – Restarting Docker Engine on node ba-vm-www2.

[root@ba-vm-www2 ~]# systemctl daemon-reload
[root@ba-vm-www2 ~]# systemctl restart docker
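
After the restart, each daemon's cluster options can be confirmed with docker info, which reports the configured cluster store and cluster advertise address (an optional check, run on either node):

# docker info | grep -i cluster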

Step 23 – Creating a new network on node ba-vm-www1.

[root@ba-vm-www1 ~]# docker network create -d overlay rede_ba_vm_www1
f0468c64f0256567812caf4b7f5c9c84287b8865babab05e7fda8b7f194b9376

Step 24 – Creating a new network on node ba-vm-www2.

[root@ba-vm-www2 ~]# docker network create -d overlay rede2_ba_vm_www2
55236d3d68a6c9ab75991017092404440ae3604e2e254267877e816f4b81f565

Step 25 – Rebooting both nodes (ba-vm-www1 and ba-vm-www2).

# reboot

Step 26 – Validating after the reboot.

[root@ba-vm-www1 ~]# docker network create -d overlay rede3_ba_vm_www1
f6fbc1ffa82f59dc80a39f68d9f0d01c6c579239c3f82bb5881b53f27e95ad92
[root@ba-vm-www1 ~]# docker network ls
NETWORK ID          NAME                DRIVER
c851ceb4a15d        weave               weavemesh           
c976b487cfd0        bridge              bridge              
6fb8ebbd90a4        none                null                
5f8b77c7a0b7        host                host                
3f2433bc57fe        rede3_ba_vm_www2    overlay             
55236d3d68a6        rede2_ba_vm_www2    overlay             
6a885d9c9694        rede_ba_vm_www2     overlay             
8bb27a8fc354        rede2_ba_vm_www1    overlay             
f0468c64f025        rede_ba_vm_www1     overlay             
f6fbc1ffa82f        rede3_ba_vm_www1    overlay             
[root@ba-vm-www2 ~]# docker network create -d overlay rede3_ba_vm_www2
3f2433bc57fea8980560fe3e37aabd2dbead89a1041b74cc89b0853c596aed48
[root@ba-vm-www2 ~]# docker network ls
NETWORK ID          NAME                DRIVER
3f2433bc57fe        rede3_ba_vm_www2    overlay             
55236d3d68a6        rede2_ba_vm_www2    overlay             
6a885d9c9694        rede_ba_vm_www2     overlay             
8bb27a8fc354        rede2_ba_vm_www1    overlay             
f0468c64f025        rede_ba_vm_www1     overlay             
f6fbc1ffa82f        rede3_ba_vm_www1    overlay             
3ae10a7481d6        bridge              bridge              
97736563c247        weave               weavemesh           
75ddfda41bba        none                null                
c73b6025591b        host                host    
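
As a final connectivity test across the two hosts, a pair of containers can be attached to the same overlay network and pinged by name. The busybox image and the container names c1/c2 below are only illustrative (they are not part of this tutorial's setup), and this assumes --cluster-advertise points to each node's own IP as adjusted in Steps 19 and 20:

[root@ba-vm-www1 ~]# docker run -itd --name=c1 --net=rede_ba_vm_www1 busybox
[root@ba-vm-www2 ~]# docker run -itd --name=c2 --net=rede_ba_vm_www1 busybox
[root@ba-vm-www2 ~]# docker exec c2 ping -c 3 c1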

Continued in tutorial 1.7.

Sources:
https://docs.docker.com/engine/userguide/networking/get-started-overlay/

Multi-Host Docker Networking is now ready for production:
https://blog.docker.com/2015/11/docker-1-9-production-ready-swarm-multi-host-networking/
