
OpenVZ Cheatsheet

List Containers

vzlist -a
The -a flag lists all containers, including stopped ones; plain vzlist shows only those running.

Create Container

vzctl create $VM_ID \
--ostemplate $TEMPLATE \
--conf $CONF_TYPE \
--ipadd $IP \
--hostname $HOSTNAME
Example
vzctl create 101 \
--ostemplate ubuntu-14.04-x86_64 \
--conf basic \
--ipadd 192.168.1.1 \
--hostname client1

Destroy Container

vzctl destroy $VM_ID

Start Container

vzctl start $VM_ID

Stop Container

vzctl stop $VM_ID

Enter Container

vzctl enter $VM_ID

Set User's Password

vzctl set $VM_ID --userpasswd $USER:$PASSWORD

Add An IP

vzctl set $VM_ID --ipadd $IP --save
This adds an extra IP to the container rather than replacing the existing one.

Remove An IP

vzctl set $VM_ID --ipdel $IP --save
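The create/set commands above lend themselves to a small batch script. Below is a sketch that just prints the vzctl create command for a range of containers (a dry run; drop the echo to execute them for real). The template name, ID range, and IP scheme are my own assumptions, so adjust them for your setup.

```shell
#!/bin/bash
# Dry-run sketch: print the vzctl create command for a batch of containers.
# TEMPLATE, the ID range, and the IP scheme are assumptions - adjust as needed.
TEMPLATE=ubuntu-14.04-x86_64

for VM_ID in 101 102 103; do
    IP="192.168.1.$VM_ID"
    HOSTNAME="client$((VM_ID - 100))"
    echo vzctl create "$VM_ID" \
        --ostemplate "$TEMPLATE" \
        --conf basic \
        --ipadd "$IP" \
        --hostname "$HOSTNAME"
done
```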


Wable - New VPS Review

Wable is a new VPS provider with a twist. Instead of buying each individual VPS, you buy a "bundle" of resources that you can distribute amongst up to as many VPS as your bundle allows.

Why Bundles Are Better

Imagine you have a website that converts videos. This website may need three services:

  • A web frontend that displays information to users and allows them to upload videos, tracking everything in the database of your choice.
  • A storage service that is just responsible for storing and retrieving the videos.
  • One or more "compute" engines that actually perform the conversions, pulling one video at a time from the storage service, converting it, and sending it back.

Each of the services listed above has completely different resource requirements. The storage service mostly needs storage, the database needs lots of RAM and some CPU, and the compute engines need all the CPU that the database isn't using. With a bundle this is easy: you just deploy your 3+ services on their own VPSes and allocate resources as needed. With most other providers, you would have to buy one of the larger plans for each VPS in order to get enough of the one resource you actually need, which costs far more.

SSD RAID 10

Wable only uses local RAID 10 SSD storage, the same as Digital Ocean, but the price per GB is cheaper. There is a limit of 100GB storage per VPS though, which you would have to work around with something like GlusterFS if you need a much larger volume. Providers like AWS and Rackspace only use networked storage, except for their ephemeral drives, which are almost pointless.

Digital Ocean Price Comparison

Below is a comparison of two plans that are as close in price as possible, with Digital Ocean on the left and Wable on the right.

As you can see, paying 20% more (just $1) gets you:

  • 2 x CPU access
  • 3 x bandwidth monthly utilization limit
  • 2 x the storage capacity
  • RAID 10
  • 5 additional IP addresses

The other factor to think about is access to a 2-10 Gbps port. I need to perform further testing, but I'm pretty sure that on Digital Ocean I was capped at 100 Mbit.

Concerns

Before rushing off to migrate all your infrastructure, there are a few things that you should think about. These are OpenVZ-based VPSes rather than KVM (Digital Ocean) or Xen (Amazon Web Services/Rackspace). This may not be an issue for you, but you should run a trial bundle first to see if there are any kernel modules you need which aren't provided in the 2.6.32-042stab085.20 kernel they are using at the time of this writing. The other big factors are that, being new, they have less of a reputation to judge them by, and I do not see any details about how much compute power you should actually expect. E.g. clients are limited by core access only, and there is no way of knowing how "oversold" each node is.

Pros

Price

The most important part of any product is the price, and as described earlier, their price-to-resource ratio appears to blow away the competition.

Flexibility

Wable's bundle system, whereby you buy "resources" and allocate them between multiple VPSes, is unique. This is great if you have a variety of services with different needs. E.g. your backup service just needs lots of storage, whereas your database probably needs much more in the way of RAM and CPU.

Other providers do allow you to scale a VPS's resources individually, but at exorbitant prices. What is unique about Wable's concept is that you buy a bundle and then split it as you like across your servers. This probably helps keep their prices competitive.

5 Second Scaling - No Reboot

It turns out that you can change the resource allocation of your VPS whilst it's running (no reboot), and the process takes just 5 seconds, as demonstrated in the video below:

This is incredibly useful. You can now pull resources from your other servers in order to quickly respond to demand, such as scaling up the CPU on your webserver/database when there is a burst of traffic. As soon as there is an API, you can have code monitor this for you and adjust automatically. The key factor here is that there is no downtime. This cannot be done on Digital Ocean, Amazon Web Services, or Rackspace: they all require a reboot, and some resources on some services cannot be scaled at all.

SSD RAID 10

This is actually two points combined. Using only SSD storage is itself a huge factor, and has been one of the selling points of Digital Ocean. With AWS/Rackspace, which use networked storage, you may notice a significant percentage of your CPU time being wasted on "disk wait". Nothing compares to local SSD storage in terms of latency. IOPS and throughput vary a great deal with how oversold the node is, but are also generally better. RAID 10 means twice as much physical disk is used to provide your capacity, which gives your data redundancy, and the striping across drives should also help with disk throughput. The site doesn't state whether they are using software RAID, fake RAID, or true hardware RAID (LSI/3ware).

Disk throughput depends on so many factors, including block size, filesystem type, node contention, and the bandwidth limits on each intermediary physical layer (SATA expanders/RAID cards etc), that all you can do is benchmark, which I will do later.

Stupefying Network Speeds

Please refer to the network benchmark, but on a very simple test, I did actually achieve a 700mbit download speed. The trick is to find an external server that can match the same bandwidth levels. There is no point performing tests from within the Wable network.

Cons

OpenVZ based

Using OpenVZ basically means you share the same kernel as the host and everybody else; as some people put it, it is very much a glorified chroot. This is unlike Xen and KVM, where you can use any kernel you like. This may not seem important, but if the host does not have a kernel module that you need, then bad luck. This particularly affects me because everything I create now runs on Docker, and with Docker, having the latest kernel really helps, especially as it is delving into areas such as BTRFS-based storage engines.

You may also have a tough time setting up the firewall as you like, and have to go back to good old-fashioned iptables.

OpenVZ is perfectly fine for things like Minecraft servers. However, Wable will not currently run Docker containers. OpenVZ is generally not suitable for running Docker; I'm sure there are a few specialists out there who have managed a workaround, but that requires being able to update the kernel, something you cannot do as the client. If you need to run cheap Docker containers, Digital Ocean is your best bet.

No Attachable Networked Storage

Unlike Amazon Web Services and Rackspace, there is no attachable networked storage. However, the flexibility of allocating resources as you like from your bundle slightly offsets this compared to the likes of Digital Ocean's setup. E.g. buying a larger bundle doesn't feel like a waste, as you can allocate the extra CPU/RAM to your other VPSes.

No API - Completely Manual

This is rather surprising, since it seems that every cloud service provider has one. An API enables developers to deploy/scale their infrastructure quickly and automatically. Ideally your services should automatically scale with demand, rather than wait until the admin gets an alert.

America Only

This service is provided only in America, which for some of us is the last place we want our data/services located, for legal/latency reasons. With Amazon Web Services, you can have your services deployed pretty much anywhere in the world, and Digital Ocean has datacenters in New York, San Francisco, Amsterdam (Europe), and Singapore.

How Much Compute Power Will I Actually Get?

Wable is very much in the same boat as Digital Ocean in terms of compute. They grant you access to more vCPUs but do not guarantee you any compute power. You just have to hope that the node isn't heavily oversold and that you don't have "noisy neighbours" sucking up the processing power. This is one of the strengths of AWS EC2: the ECU is a measure of your dedicated compute power, but you pay far more and can still end up with a poorer service. With a $15 budget on EC2 you know you will get a crap service, but you know exactly how crap, because it's guaranteed to be that crap.

8 VCPU per node

You cannot have more than 8 cores on an instance with Wable. This means that if your application is not horizontally scalable, you may have to go elsewhere. Digital Ocean offers an instance with access to 20 CPU cores, whereas AWS offers a 32 vCPU instance (more importantly, 108 dedicated "ECUs" of power). If you need more than 8 cores, you're probably better off with a dedicated server.

Benchmarks

Before looking at my benchmarks, it may be worth looking at the Serverbear benchmark I ran on the bundle1 option.

Network

I don't consider this a comprehensive bandwidth test, as I think I may have actually been limited by the sender, or perhaps the disk throughput rate, but I managed to download the entire Ubuntu 12 Desktop ISO in 14 seconds, which averaged out at roughly 60 megabytes per second. At one point I saw nload reach over 700 Mbit/s.

Disk

When creating a 10 GiB virtual block device, with a block size of 1MB, I was able to achieve 111 MB/s.
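For anyone wanting to reproduce this kind of figure, here is a hedged sketch of a simple sequential-write dd benchmark. The file path and the (much smaller) size are my own choices, not the exact test I ran; conv=fdatasync makes dd flush to disk before reporting, so the throughput figure isn't just the page cache.

```shell
#!/bin/bash
# Sketch: sequential-write benchmark with dd, using 1M blocks.
# Size is kept small here for illustration; scale count up for a real test.
# conv=fdatasync forces the data to disk before dd reports its throughput.
TESTFILE=/tmp/dd_benchmark_test
dd if=/dev/zero of="$TESTFILE" bs=1M count=64 conv=fdatasync

# Clean up the test file afterwards.
rm -f "$TESTFILE"
```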



However, I was rather surprised to find that I achieved nearly double this on my $5 Digital Ocean VPS:

Conclusion

Based on these "too good to be true" prices, this could be a very welcome addition to the VPS market. However, time will tell whether the service is economically viable enough to survive and remain stable, or whether it just pushes competitors to reduce their prices. It takes time to build up the brand reputation/recognition that brings in the business-level consumers who have the real money, and this price point looks to be bait for early adopters until they can reach that point. Not having an API is a major drawback, but one that I believe can be quickly resolved.

OpenVZ Guest Firewall Setup

Today I am going to show you how I secure my OpenVZ guest container, which is running Ubuntu 12.04 LTS. When using OpenVZ I use iptables directly, as applications such as UFW won't work for various reasons (but I'm sure someone will have a hack for it). Knowing how to use iptables is always good as it is quite powerful. Also, using UFW to manage your firewall is like using phpMyAdmin to administer your database instead of the MySQL command line tool.

Steps

    Write the following lines to a file and execute it with bash
    #!/bin/bash
    iptables -A INPUT -i lo -j ACCEPT
    iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
    
    # Allow our trusted IPs
    iptables -A INPUT -s [MY STATIC IP HERE]/30 -p tcp -j ACCEPT
    #iptables -A OUTPUT -d [MY STATIC IP HERE]/30 -p tcp -j ACCEPT
    
    # Allow web traffic to the public
    iptables -A INPUT -p tcp --dport 80 -m state --state NEW,ESTABLISHED -j ACCEPT
    #iptables -A OUTPUT -p tcp --sport 80 -m state --state ESTABLISHED -j ACCEPT
    
    # Allow secure web traffic to the public
    iptables -A INPUT -p tcp --dport 443 -m state --state NEW,ESTABLISHED -j ACCEPT
    #iptables -A OUTPUT -p tcp --sport 443 -m state --state ESTABLISHED -j ACCEPT
    
    iptables -P INPUT DROP
    iptables -P OUTPUT ACCEPT
    iptables -P FORWARD DROP
    Don't forget to replace the [MY STATIC IP HERE] placeholders with an IP you are going to connect with SSH from, and not the IP of the server you are configuring.
    It is probably a good idea if you add multiple trusted IPs to the Allow our trusted IPs section.
    The OUTPUT statements are commented out in case you later want to enable them and change the OUTPUT policy to DROP.
    Check that the rules are now in place by running iptables -L
    Now we need to set up a location to store our iptables rules. Personally, I create a folder in root for everything to do with iptables but you can just save to a file anywhere as long as you adjust the path in the next step.
    sudo mkdir /root/iptables
    sudo iptables-save > /root/iptables/rules.txt
    Now add the following line to /etc/rc.local
    /sbin/iptables-restore < /root/iptables/rules.txt
    That's it! However, if you ever need a "reset" script, then you may want to add the following lines to reset.sh in the iptables folder we made.
    #!/bin/bash
    # Set permissive policies and flush the rules first so you can't lock
    # yourself out, then delete any user-defined chains.
    sudo iptables -P INPUT ACCEPT
    sudo iptables -P FORWARD ACCEPT
    sudo iptables -P OUTPUT ACCEPT
    sudo iptables --flush
    sudo iptables -X
    sudo iptables -t nat -F
    sudo iptables -t nat -X
    sudo iptables -t mangle -F
    sudo iptables -t mangle -X
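For reference, after adding the iptables-restore line described in the steps above, /etc/rc.local would look something like this (a sketch for Ubuntu 12.04; exit 0 must remain the last line):

```shell
#!/bin/sh -e
#
# /etc/rc.local - executed at the end of each multiuser runlevel.
# Restore the firewall rules that were saved with iptables-save.
/sbin/iptables-restore < /root/iptables/rules.txt
exit 0
```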
    


OpenVZ - Enabling Iptables for Containers

Instructions

    change in /etc/sysconfig/iptables on the HOST from:
    IPTABLES_MODULES=""
    to
    IPTABLES_MODULES="ipt_REJECT ipt_tos ipt_TOS ipt_LOG ip_conntrack ipt_limit ipt_multiport iptable_filter iptable_mangle ipt_TCPMSS ipt_tcpmss ipt_ttl ipt_length ipt_state iptable_nat ip_nat_ftp"
    Restart the OpenVZ service with the command below. Please note that this will suspend and restart all of your containers. I had an rsync (CentOS mirror) running on one of them when this happened, and it continued happily afterwards.
    /etc/init.d/vz restart
    In my experience, even after having done this, there was still no /etc/sysconfig/iptables file in CentOS containers. Also, running iptables-save and iptables-restore did not write to or read from that file. You have to specify the file manually, like so:
    iptables-save > /etc/sysconfig/iptables
    iptables-restore < /etc/sysconfig/iptables


OpenVZ - Create Custom Ubuntu Template

Introduction

Creating a custom OpenVZ template for Debian or Ubuntu is really easy. It is simply a case of deploying a base container, installing what you want on it, and then zipping it all back up again under a different name. Please note that these steps are not the same if you want to create a custom CentOS template, and this tutorial was performed on a minimal Ubuntu 12.04 x86 container.

    Create Container

    To create a fresh Ubuntu container, simply copy the following commands into a file and execute it with bash. Feel free to update the variables appropriately, but the VM_ID must not go below 101.
    VM_ID=101
    IP=192.168.1.1
    HOSTNAME=client1
    vzctl create $VM_ID --ostemplate ubuntu-14.04-x86_64 --conf basic --ipadd $IP --hostname $HOSTNAME
    All the available templates can be found here.
    Enter the container.
    vzctl enter $VM_ID
    Perform any actions that you want to perform to make it your own custom template. For example, do what I did and update it before installing Java 7 and downloading the minecraft server jar file. All files/changes will be kept.
    Exit the container (leave it), so that now you are in the host shell.
    exit
    Prepare the container for creating a template from:
    vzctl stop $VM_ID
    vzctl set $VM_ID --ipdel all --save
    Create the new template from the container.
    cd /vz/private/$VM_ID
    NEW_TEMPLATE_NAME=custom_template
    tar --numeric-owner -czf /vz/template/cache/$NEW_TEMPLATE_NAME.tar.gz .

    Optional Additional Steps

    Feel free to now remove the old container that you created the custom template from.
    vzctl destroy $VM_ID
    rm -f /etc/vz/conf/$VM_ID.conf.destroyed
    It may be a good idea to compare the two templates' sizes:
    ls -lh /vz/template/cache/*
    Deploy from your new custom template to check that it works.
    VM_ID=101
    IP=192.168.1.1
    HOSTNAME=client1
    vzctl create $VM_ID --ostemplate $NEW_TEMPLATE_NAME --conf basic --ipadd $IP --hostname $HOSTNAME
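Before destroying the original container, it may be worth sanity-checking the new tarball by listing its top-level entries. The sketch below demonstrates the same tar flags against a throwaway directory; on a real host you would point tar -tzf at /vz/template/cache/$NEW_TEMPLATE_NAME.tar.gz instead.

```shell
#!/bin/bash
# Sketch: build and inspect a template-style tarball, using a throwaway
# directory in place of a real /vz/private/$VM_ID filesystem.
WORKDIR=$(mktemp -d)
mkdir -p "$WORKDIR/etc" "$WORKDIR/bin"

# Same flags as the template-creation step: keep numeric owners, archive '.'
tar --numeric-owner -czf /tmp/custom_template.tar.gz -C "$WORKDIR" .

# A valid template archive lists its directories relative to '.'
tar -tzf /tmp/custom_template.tar.gz

rm -rf "$WORKDIR"
```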


Centos 6.5 - Install OpenVZ

Introduction

OpenVZ has a couple of advantages over Xen. It has proved easier to set up the host so far, and it is pretty quick to set up each virtual machine (no need to run an install process or worry about who/where/how the domUs' kernels are booting). The main advantage that I have read about so far is the ability to set allocated memory in OpenVZ and not allow clients to spill out into swap space, which kills disk IO for everyone else (something that can happen in Xen). It is not a 'true' hypervisor (and thus cannot run Windows), but it has less overhead and is extremely fast and efficient.

Install Script

Copy and paste the following script into a file and execute it. Read all the output/echo statements if you want to know what it's doing.

#!/bin/bash

# BASH guard
if [ -z "$BASH_VERSION" ]; then
    echo "this is not bash, calling self with bash...."
    SCRIPT=$(readlink -f "$0")
    /bin/bash "$SCRIPT"
    exit
fi

clear
echo 'Installing OpenVZ...'

echo "updating..."
yum update -y

echo 'installing wget...'
yum install wget -y

echo 'Adding openvz Repo...'
cd /etc/yum.repos.d
wget http://download.openvz.org/openvz.repo
rpm --import http://download.openvz.org/RPM-GPG-Key-OpenVZ

echo 'Installing OpenVZ Kernel...'
yum install -y vzkernel

echo 'Installing additional tools...'
yum install vzctl vzquota ploop -y

echo 'Changing around some config files..'
sed -i 's/kernel.sysrq = 0/kernel.sysrq = 1/g' /etc/sysctl.conf

echo "Setting up packet forwarding..."
sed -i 's/net.ipv4.ip_forward = 0/net.ipv4.ip_forward = 1/g' /etc/sysctl.conf

# With vzctl 4.4 or newer there is no need to do manual configuration. Skip to #Tools_installation.
# source: http://openvz.org/Quick_installation
#echo 'net.ipv4.conf.default.proxy_arp = 0' >> /etc/sysctl.conf
#echo 'net.ipv4.conf.all.rp_filter = 1' >> /etc/sysctl.conf
#echo 'net.ipv4.conf.default.send_redirects = 1' >> /etc/sysctl.conf
#echo 'net.ipv4.conf.all.send_redirects = 0' >> /etc/sysctl.conf
#echo 'net.ipv4.icmp_echo_ignore_broadcasts=1' >> /etc/sysctl.conf
#echo 'net.ipv4.conf.default.forwarding=1' >> /etc/sysctl.conf


echo "Allowing multiple subnets to reside on the same network interface..."
sed -i 's/#NEIGHBOUR_DEVS=all/NEIGHBOUR_DEVS=all/g' /etc/vz/vz.conf
sed -i 's/NEIGHBOUR_DEVS=detect/NEIGHBOUR_DEVS=all/g' /etc/vz/vz.conf

echo "Setting container layout to default to ploop (VM in a file)..."
sed -i 's/#VE_LAYOUT=ploop/VE_LAYOUT=ploop/g' /etc/vz/vz.conf

echo "Setting Ubuntu 12.04 64bit to be the default template..."
sed -i 's/centos-6-x86/ubuntu-12.04-x86_64/g' /etc/vz/vz.conf

echo 'Applying sysctl settings...'
sysctl -p

echo "Disabling selinux..."
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux

echo "disabling iptables..."
/etc/init.d/iptables stop && chkconfig iptables off

clear

echo "OpenVZ Is now Installed. "
echo "Please reboot into the openvz kernel to start using it."
echo "Programster"
This script was last tested on the 15th August 2014
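After rebooting, it's worth confirming you actually booted into the OpenVZ kernel before creating any containers. A small sketch (OpenVZ stable kernel versions carry "stab" in their name, e.g. 2.6.32-042stab085.20):

```shell
#!/bin/bash
# Sketch: check whether the running kernel is an OpenVZ one.
# OpenVZ stable kernel versions contain "stab", e.g. 2.6.32-042stab085.20.
KERNEL=$(uname -r)
if echo "$KERNEL" | grep -q stab; then
    echo "Running the OpenVZ kernel: $KERNEL"
else
    echo "Not an OpenVZ kernel ($KERNEL) - reboot into vzkernel first." >&2
fi
```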

Start Your First Container

    Create a virtual machine with a command like the following (I build the command in an sh file before running it):
    vzctl create $VM_ID \
    --ostemplate $TEMPLATE \
    --conf $CONF_TYPE \
    --ipadd $IP \
    --onboot yes \
    --hostname $HOSTNAME
    Here is an example already filled out:
    vzctl create 101 \
    --ostemplate centos-6-x86_64 \
    --conf basic \
    --ipadd 192.168.1.43 \
    --hostname centos1
    
    OR
    vzctl create 101 \
    --ostemplate ubuntu-14.04-x86_64 \
    --conf basic \
    --ipadd 192.168.1.43 \
    --hostname ubuntu1
    
    The root password will be the same as the host machine unless you change it using passwd inside the machine, or by issuing the following command:
    vzctl set {CTID} --userpasswd {user}:{password} --save
    To start the virtual machine that you have created, run:
    vzctl start {CTID}
    The machine could reach Google's DNS (8.8.8.8) automatically, but I had to set the nameserver manually:
    echo nameserver 8.8.8.8 > /etc/resolv.conf
    Restart the network for the nameserver to take effect:
    service network restart
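A quick sketch to confirm the container now has a nameserver configured (checking /etc/resolv.conf rather than doing a live lookup, since the container may have restricted outbound access):

```shell
#!/bin/bash
# Sketch: verify at least one nameserver line exists in resolv.conf.
if grep '^nameserver' /etc/resolv.conf; then
    echo "nameserver is configured"
else
    echo "no nameserver set - DNS lookups will fail" >&2
fi
```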
