OpenStack Components

Transcript

OpenStack Components
OpenStack
Dott. Luca Tasquier
E-mail: [email protected]
OpenStack
OpenStack (http://www.openstack.org/) is a cloud operating system that
controls large pools of compute, storage, and networking resources throughout
a datacenter, all managed through a dashboard that gives administrators
control while empowering their users to provision resources through a web
interface.
Predominantly acting as an infrastructure as a service (IaaS) platform, it is free
and open-source software released under the terms of the Apache License.
The project is managed by the OpenStack Foundation, a non-profit corporate
entity established in September 2012 to promote OpenStack software and its
community.
More than 200 companies have joined the project, including Arista Networks, AT&T,
AMD, Brocade Communications Systems, Canonical, Cisco, Dell, EMC,
Ericsson, Groupe Bull, HP, IBM, Inktank, Intel, NEC, NetApp, Nexenta,
Rackspace Hosting, Red Hat, SUSE Linux, VMware, and Yahoo!.
OpenStack
The technology consists of a series of interrelated projects that control pools of
processing, storage, and networking resources throughout a datacenter, able
to be managed or provisioned through a web-based dashboard, command-line
tools, or a RESTful API.
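As a sketch of the RESTful side, the following Python builds the JSON body a client would POST to Keystone to obtain a token (the v2.0 API, current at the time of this course); the endpoint, tenant, and credentials are placeholders for a real deployment's values:

```python
import json

# Hypothetical endpoint -- substitute your own deployment's controller address.
KEYSTONE_URL = "http://controller:5000/v2.0/tokens"

# Keystone v2.0 password-authentication payload; v3 uses a different structure.
payload = {
    "auth": {
        "tenantName": "demo",
        "passwordCredentials": {
            "username": "admin",
            "password": "secret",
        },
    }
}

body = json.dumps(payload)
# The actual call would be along the lines of:
#   req = urllib.request.Request(KEYSTONE_URL, body.encode(),
#                                {"Content-Type": "application/json"})
#   urllib.request.urlopen(req)
print(body)
```

The token returned by this call is then passed as the `X-Auth-Token` header on every subsequent request to the other services.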
OpenStack Components – Compute
OpenStack Compute (Nova) is a cloud computing fabric controller (the main
part of an IaaS system). It is written in Python and uses many external libraries
such as Eventlet (for concurrent programming), Kombu (for AMQP
communication), and SQLAlchemy (for database access). Compute's
architecture is designed to scale horizontally on standard hardware with no
proprietary hardware or software requirements and provide the ability to
integrate with legacy systems and third party technologies. It is designed to
manage and automate pools of computer resources and can work with widely
available virtualization technologies, as well as bare metal and high-performance computing (HPC) configurations. KVM and XenServer are
available choices for hypervisor technology, together with Hyper-V and Linux
container technology such as LXC.
OpenStack Components – Object Storage
OpenStack Object Storage (Swift) is a scalable redundant storage system.
Objects and files are written to multiple disk drives spread throughout servers
in the data center, with the OpenStack software responsible for ensuring data
replication and integrity across the cluster. Storage clusters scale horizontally
simply by adding new servers. Should a server or hard drive fail, OpenStack
replicates its content from other active nodes to new locations in the cluster.
Because OpenStack uses software logic to ensure data replication and
distribution across different devices, inexpensive commodity hard drives and
servers can be used.
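That software logic can be illustrated with a deliberately simplified sketch (not Swift's actual ring implementation): hash the object path, derive a partition from the hash, and assign each partition to several devices so every object ends up with multiple replicas:

```python
import hashlib

# Toy ring: a handful of devices, 3 replicas, 2**4 = 16 partitions.
DEVICES = ["sdb1", "sdc1", "sdd1", "sde1"]
REPLICAS = 3
PART_POWER = 4

def partition(path: str) -> int:
    # Swift-style idea: MD5 the object path, keep the top PART_POWER bits.
    digest = hashlib.md5(path.encode()).hexdigest()
    return int(digest, 16) >> (128 - PART_POWER)

def replica_devices(path: str):
    # Toy assignment: consecutive devices starting at the partition index.
    part = partition(path)
    return [DEVICES[(part + i) % len(DEVICES)] for i in range(REPLICAS)]

devs = replica_devices("/account/container/object.txt")
print(devs)  # three distinct devices hold the replicas
```

Because placement is computed from the hash rather than stored per-object, any node can locate an object's replicas, and adding servers only remaps partitions rather than individual objects.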
OpenStack Components – Block Storage
OpenStack Block Storage (Cinder) provides persistent block-level storage
devices for use with OpenStack compute instances. The block storage system
manages the creation, attaching and detaching of the block devices to servers.
Block storage volumes are fully integrated into OpenStack Compute and the
Dashboard allowing for cloud users to manage their own storage needs. In
addition to local Linux server storage, it can use storage platforms including
Ceph, CloudByte, Coraid, EMC (VMAX and VNX), GlusterFS, IBM Storage
(Storwize family, SAN Volume Controller, XIV Storage System, and GPFS),
Linux LIO, NetApp, Nexenta, Scality, SolidFire and HP (StoreVirtual and
StoreServ 3Par families). Block storage is appropriate for performance
sensitive scenarios such as database storage, expandable file systems, or
providing a server with access to raw block level storage. Snapshot
management provides powerful functionality for backing up data stored on
block storage volumes. Snapshots can be restored or used to create a new
block storage volume.
OpenStack Components – Networking
OpenStack Networking (Neutron, formerly Quantum) is a system for managing
networks and IP addresses. Like other aspects of the cloud operating system,
it can be used by administrators and users to increase the value of existing
datacenter assets. OpenStack Networking ensures the network will not be the
bottleneck or limiting factor in a cloud deployment and gives users real self-service, even over their network configurations.
OpenStack Networking provides networking models for different applications or
user groups. Standard models include flat networks or VLANs for separation of
servers and traffic. OpenStack Networking manages IP addresses, allowing for
dedicated static IPs or DHCP. Floating IPs allow traffic to be dynamically
rerouted to any of your compute resources, which allows you to redirect traffic
during maintenance or in the case of failure. Users can create their own
networks, control traffic and connect servers and devices to one or more
networks.
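The floating-IP behaviour described above can be pictured with a toy model (plain Python, not the Neutron API): fixed addresses are handed out DHCP-style from a tenant subnet, while a floating IP is just a mapping that can be re-pointed at another instance:

```python
import ipaddress

# Illustrative tenant subnet; addresses and instance names are invented.
subnet = ipaddress.ip_network("10.0.0.0/24")
hosts = subnet.hosts()
next(hosts)  # skip 10.0.0.1, conventionally the gateway

fixed_ips = {}     # instance -> fixed (private) IP
floating_map = {}  # floating (public) IP -> instance

def boot(instance):
    # DHCP-style: hand out the next free address from the pool.
    fixed_ips[instance] = next(hosts)

def associate(floating_ip, instance):
    # Rerouting traffic is just remapping the floating IP.
    floating_map[floating_ip] = instance

boot("web-1")
boot("web-2")
associate("203.0.113.10", "web-1")
associate("203.0.113.10", "web-2")  # failover: same public IP, new backend
print(fixed_ips, floating_map)
```

The point of the sketch: the public address stays stable while the instance behind it changes, which is what makes maintenance and failure handling transparent to clients.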
OpenStack Components – Dashboard
OpenStack Dashboard (Horizon) provides administrators and users a graphical
interface to access, provision and automate cloud-based resources. The
design allows for third party products and services, such as billing, monitoring
and additional management tools. The dashboard is also brandable for service
providers and other commercial vendors who want to make use of it.
The dashboard is just one way to interact with OpenStack resources.
Developers can automate access or build tools to manage their resources
using the native OpenStack API or the EC2 compatibility API.
OpenStack Components – Identity Service
OpenStack Identity (Keystone) provides a central directory of users mapped to
the OpenStack services they can access. It acts as a common authentication
system across the cloud operating system and can integrate with existing
backend directory services like LDAP. It supports multiple forms of
authentication including standard username and password credentials, token-based systems and AWS-style (i.e. Amazon Web Services) logins. Additionally,
the catalog provides a queryable list of all of the services deployed in an
OpenStack cloud in a single registry. Users and third-party tools can
programmatically determine which resources they can access.
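A client-side sketch of that catalog lookup, using an illustrative (made-up) catalog modelled on the service list returned with a Keystone token:

```python
# Sample service catalog; service types are standard, URLs are placeholders.
catalog = [
    {"type": "compute", "name": "nova",
     "endpoints": [{"publicURL": "http://controller:8774/v2/TENANT_ID"}]},
    {"type": "object-store", "name": "swift",
     "endpoints": [{"publicURL": "http://controller:8080/v1/AUTH_TENANT_ID"}]},
]

def public_endpoint(catalog, service_type):
    # Find the public URL for a given service type in the catalog.
    for svc in catalog:
        if svc["type"] == service_type:
            return svc["endpoints"][0]["publicURL"]
    raise LookupError(service_type)

print(public_endpoint(catalog, "compute"))
```

This is how tools avoid hard-coding service addresses: they authenticate once, then discover every endpoint from the catalog.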
OpenStack Components – Image Service
OpenStack Image Service (Glance) provides discovery, registration and
delivery services for disk and server images. Stored images can be used as a
template. It can also be used to store and catalog an unlimited number of
backups. The Image Service can store disk and server images in a variety of
back-ends, including OpenStack Object Storage. The Image Service API
provides a standard REST interface for querying information about disk images
and lets clients stream the images to new servers.
OpenStack Components – Telemetry
OpenStack Telemetry (Ceilometer) provides a single point of contact for billing systems, supplying all the counters they need to establish customer billing, across all current and future OpenStack components. The delivery of counters is traceable and auditable, the counters are easily extensible to support new projects, and the agents doing data collection are independent of the overall system.
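Those counters can be pictured as structured samples. Below is an illustrative sketch of one such sample as a Python dictionary; the field names follow Ceilometer's documented sample attributes, while every value is invented:

```python
from datetime import datetime, timezone

# Illustrative meter sample; values are made up for the example.
sample = {
    "counter_name": "cpu_util",
    "counter_type": "gauge",      # one of: gauge, cumulative, delta
    "counter_unit": "%",
    "counter_volume": 12.5,
    "resource_id": "instance-0001",
    "project_id": "demo",
    "timestamp": datetime(2013, 9, 1, tzinfo=timezone.utc).isoformat(),
}
print(sample["counter_name"], sample["counter_volume"], sample["counter_unit"])
```

A billing system aggregates streams of such samples per project and resource to produce an invoice.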
OpenStack Components – Orchestration
Heat is a service to orchestrate multiple composite cloud applications using
templates, through both an OpenStack-native REST API and a
CloudFormation-compatible Query API.
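As an illustration of the CloudFormation-compatible side, a minimal template that boots a single instance might look like the following; the image, flavor, and key names are placeholders:

```json
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Minimal template: boots a single instance",
  "Resources": {
    "MyInstance": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
        "ImageId": "Fedora19",
        "InstanceType": "m1.small",
        "KeyName": "mykey"
      }
    }
  }
}
```

Heat reads the template, resolves the resources it declares, and drives the other OpenStack services to create them.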
Amazon Web Services compatibility
OpenStack APIs are compatible with Amazon EC2 and Amazon S3 and thus
client applications written for Amazon Web Services can be used with
OpenStack with minimal porting effort.
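As a sketch, an EC2-style "Query API" request against Nova's compatibility endpoint can be assembled with the standard library; the endpoint and access key are placeholders, and a real request would additionally carry a signature:

```python
from urllib.parse import urlencode

# Placeholder endpoint in the shape Nova's EC2 API historically used.
EC2_ENDPOINT = "http://controller:8773/services/Cloud"

# EC2 Query API: the action and its arguments travel as query parameters.
params = {
    "Action": "DescribeInstances",
    "Version": "2010-08-31",
    "AWSAccessKeyId": "EC2_ACCESS_KEY",
}
url = EC2_ENDPOINT + "?" + urlencode(sorted(params.items()))
print(url)
```

Because the wire format matches, existing EC2 clients only need to be pointed at the OpenStack endpoint and given OpenStack-issued EC2 credentials.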
OpenStack single-node installation
Minimal hardware requirements
- 4 GB of RAM;
- 1 network interface;
- hardware virtualization extensions.
Installation - 1
- Install CentOS 6.4 (the Desktop variant is more user-friendly)
- Enable networking (if not already enabled)
- Update all packages: $ sudo yum update -y
Installation - 2
- Install ntpd:
$ sudo yum install -y ntp
$ sudo service ntpd start
$ sudo chkconfig ntpd on
On all compute nodes, configure the NTP client so that the controller
and the nodes stay synchronized. Add a daily cron job by placing an
executable file owned by root at /etc/cron.daily/ntpdate containing:
ntpdate <hostname or IP address of controller>
hwclock -w
Installation - 3
- Disable SELinux: $ sudo vim /etc/selinux/config
change the line containing SELINUX=... to
SELINUX=disabled
Save and reboot the machine.
Installation - 4
- Install KVM:
First, verify that the machine supports hardware virtualization and
that the support is enabled:
$ egrep -i 'vmx|svm' --color=always /proc/cpuinfo
If the command produces output and the highlighted flags contain vmx
(Intel VT) or svm (AMD-V), hardware virtualization support is present
and enabled.
Install and start KVM and virtinst:
$ sudo yum install -y kvm libvirt python-virtinst qemu-kvm
$ sudo service libvirtd start
$ sudo chkconfig libvirtd on
Verify that KVM is installed correctly:
$ sudo virsh -c qemu:///system list
If no errors are shown, KVM is installed correctly.
Installation - 5
- Install the bridge for VM networking:
A stock KVM installation does not let the VMs communicate with each
other or reach external networks. To allow this, a bridge must be
configured:
$ sudo yum install -y bridge-utils
Disable NetworkManager and switch to the default network manager:
$ sudo service NetworkManager stop
$ sudo chkconfig NetworkManager off
$ sudo chkconfig network on
$ sudo service network start
Installation - 6
Edit the configuration file ifcfg-eth0 ($ sudo vim
/etc/sysconfig/network-scripts/ifcfg-eth0) as
follows:
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED=no
BRIDGE=br0
Installation - 7
Create a file ifcfg-br0 ($ sudo touch
/etc/sysconfig/network-scripts/ifcfg-br0) and edit it
as follows:
DEVICE=br0
ONBOOT=yes
TYPE=Bridge
NM_CONTROLLED=no
BOOTPROTO=none
IPADDR=$ip_address
NETMASK=$netmask
GATEWAY=$gateway
DNS1=$dns1
DNS2=$dns2
Installation – 8
Restart the network service: $ sudo service network restart
Verify the network configuration: $ ifconfig
Installation – 9
- Fixing the available-space problem: CentOS creates two separate
partitions for root and home. The root partition is much smaller than
the home partition, but the whole OpenStack installation (and the
other services as well) lives under /var, which over time fills the
root partition even though most of the disk is free. To fix this,
copy /var into a folder under home, delete /var (if some files cannot
be deleted, rename the remaining /var folder to /var_old) and create
a symbolic link from /var to the new folder /home/var:
$ sudo mkdir /home/var
$ sudo cp -prv /var/. /home/var
$ sudo rm -rf /var
$ sudo ln -s /home/var /var
$ sudo reboot
Installation – 10
- Install OpenStack RDO (installation without the networking
manager, Neutron):
$ sudo yum install -y
http://rdo.fedorapeople.org/rdo-release.rpm
$ sudo yum install -y openstack-packstack
$ packstack --allinone --os-neutron-install=n
Reboot when the installation finishes.
Installation – 11
- Accessing the dashboard: try to reach the dashboard
(Horizon) at the endpoint provided (the address of the br0 bridge):
http://CONTROL_NODE/dashboard. If you get a "500
Internal Error", install PIP and PBR:
$ sudo yum install -y python-pip
$ sudo pip install pbr
To log in to the dashboard, use admin as the Username and, as the
Password, the one generated inside the keystone_admin file:
$ cat ~/keystone_admin
To use the OpenStack services from the command line, source
keystone_admin:
$ source ~/keystone_admin
Usage – 1
- Enable SSH on the default security group: Project->Access &
Security->Manage Compute. Under the Security
Groups header, click Edit Rules for the default security group.
Click Add Rule and enter 22 in the Port field. Add
the rule.
- Import a key pair: retrieve the public key generated by
ssh
$ cat ~/.ssh/id_rsa.pub
Copy the command's output, go to the dashboard and, in
Access & Security, click the Keypairs tab and then Import
Keypair. Choose a name for the Name field and
paste the previously copied key into the Key field.
Usage – 2
- Create an image (Fedora19 as an example): Manage Compute->Image &
Snapshots->Create Image. Enter Fedora19 in Name,
http://cloud.fedoraproject.org/fedora19.x86_64.qcow2 in Image
Location, and QCOW2 in Format. Select Public to
make the image visible to other projects, and Protected to
prevent deletion (and other modifications) of the image by
users belonging to other projects.
N.B.: other ready-made images are available at
http://openstack.redhat.com/Image_resources
Usage – 3
- Launch an instance: in Images, select the image and click
Launch. Choose a name for the instance and a flavor (tiny, small,
medium, large…), select the security group (if there is more than
one) and create the instance.
- Access the instance: $ ssh -i ~/.ssh/id_rsa
user@instance_local_ip.
If ssh complains about duplicate identities for the same IP address,
delete the known_hosts file:
$ rm ~/.ssh/known_hosts
Usage – 4
- Attach a volume (tested with the Fedora19 image): sysfsutils must be
present on the compute nodes. From the dashboard, create a volume, selecting
the GB to allocate. Start the image's instance, go to the volumes tab,
select the volume and click Edit Attachments:
enter the device for the volume (e.g. /dev/vdb). Hard-reboot the instance
(using the More drop-down menu for the selected instance) and connect
via SSH. Check the available partitions:
$ cat /proc/partitions
If everything went well, vdb will appear in the list. Format the partition
(e.g. with ext3) and mount the volume on a folder:
$ sudo mkfs.ext3 /dev/vdb
$ mkdir $volume_folder_path
$ sudo mount /dev/vdb $volume_folder_path
N.B.: on subsequent uses of the volume, even by other images, there is no
need to reformat. The volume's content is persistent.
Usage – 5
- Use the Object Storage: to let member users create containers
as well, edit the file /etc/swift/proxy-server.conf and add the
_member_ role (and/or Member) in the part of the file containing
the string operator_roles = admin, SwiftOperator, so as to obtain
the following result:
[filter:keystone]
use = egg:swift#keystoneauth
operator_roles = _member_, admin, SwiftOperator
is_admin = true
cache = swift.cache
Usage – 6
Save the file and restart the swift account and proxy services:
$ sudo service openstack-swift-account restart
$ sudo service openstack-swift-proxy restart
Go to the dashboard and create a container: Object Store->Containers->Create Container. Once the container is
created, you can upload files, download them, delete them, and so on.
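Under the dashboard, these container operations are plain HTTP calls to the Swift API; the sketch below assembles the URL and headers for an object upload (a single PUT), with placeholder endpoint, token, and names:

```python
from urllib.parse import quote

# Placeholder storage URL in the /v1/AUTH_<tenant> shape Swift uses.
storage_url = "http://controller:8080/v1/AUTH_demo"

def object_url(container, name):
    # Objects live at <storage_url>/<container>/<object>.
    return "%s/%s/%s" % (storage_url, quote(container), quote(name))

# Every request authenticates with the Keystone-issued token.
headers = {"X-Auth-Token": "TOKEN", "Content-Type": "text/plain"}
url = object_url("mycontainer", "hello.txt")
# A real upload would be e.g.:
#   req = urllib.request.Request(url, b"data", headers, method="PUT")
#   urllib.request.urlopen(req)
print(url)
```

Downloading is a GET on the same URL and deleting is a DELETE, which is why any HTTP client can drive the object store without special tooling.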