Recommended IT books of the month

I'd like to recommend some of the latest and best IT books (available DRM-free at Packt) that I have read recently:

The first is Mastering Proxmox (Wasim Ahmed). Proxmox is based on KVM and container-based virtualization and manages virtual machines, storage, virtualized networks, and HA clustering.

The book covers the basic topics with good examples; my only complaint is that I would have liked more information about ZFS.

Still, it is worth the cost because it is a perfect guide for beginners and advanced users alike: it explains all of the topics you need to run a complete Proxmox cluster and shows a lot of real scenarios with different architectures at the end of the book.

After reading this book you will be able to manage a complete Proxmox cluster backed by a Ceph storage cluster.

The second recommendation, for those of you interested in DevOps culture and automation tools, is Configuration Management with Chef-Solo (Naveed ur Rahman), which has been reviewed by my colleague Jorge Moratilla and covers an easy and useful way to manage your platform without a client-server architecture.

It is ideal for people with basic Ruby and sysadmin knowledge who want to progress with automation tools, provided you know the basics and have some experience with this kind of tool.


CloudFormation and VPC real examples Part I

AWS CloudFormation is a very useful DevOps tool for deploying and updating a template and its associated collection of resources (called a stack). It lets you create your complete AWS infrastructure in several regions (remember the disaster recovery plan!), keep it under version control (it is JSON code!), and create, update, and destroy the stack with confidence.
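To make the template vocabulary concrete, a CloudFormation template is a single JSON document with a handful of well-known top-level sections. The skeleton below is purely illustrative, not part of the script discussed in this post:

```json
{
  "AWSTemplateFormatVersion" : "2010-09-09",
  "Description" : "Minimal illustrative skeleton of a template",
  "Parameters" : { },
  "Mappings" : { },
  "Resources" : { },
  "Outputs" : { }
}
```

Only "Resources" is mandatory; the other sections are there to parameterize, document, and report on the stack.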

I recommend reading this presentation, which is a complete and very good briefing on AWS CloudFormation features:

I also strongly recommend checking the templates available at http://aws.amazon.com/cloudformation/aws-cloudformation-templates/

This is a real script (source code available at https://github.com/juanviz/AwsScripts/tree/master/cloudformation) that creates the infrastructure of an environment composed of:

  •  NATSecurityGroup
  •  PrivateSubnet
  •  PublicSubnet
  •  LoadBalancerSecurityGroup
  •  PrivateSubnetRouteTableAssociation
  •  PrivateSubnetNetworkAclAssociation
  •  PublicSubnetRouteTableAssociation
  •  PublicSubnetNetworkAclAssociation
  •  NATDevice
  •  ElasticLoadBalancer
  •  PrivateRoute
  •  NATIPAddress
  •  ElasticLoadBalancerBackend

The script can be executed from the AWS web console, https://console.aws.amazon.com/cloudformation, or with the CFN tools from a shell, http://aws.amazon.com/developertools/2555753788650372.

Note: the script assumes that you have already created a VPC with its route tables and ACLs. A second post will explain the file that creates the whole infrastructure.

{
 "AWSTemplateFormatVersion" : "2013-07-10",
 "Description" : "Template which creates a public and private subnet for a new environment with its required resources included NAT instance and Load Balancer with AppCookieStickinessPolicy",

#### The Parameters block defines the inputs that the operator has to fill in with the right values for the environment resources we are going to create,
#### for example the keypair of the NAT instance.

"Parameters" : {
 "InstanceType" : {
 "Description" : "NAT instance type",
 "Type" : "String",
 "Default" : "m1.small",
 "AllowedValues" : [ "t1.micro","m1.small","m1.medium","m1.large","m1.xlarge","m2.xlarge","m2.2xlarge","m2.4xlarge","m3.xlarge","m3.2xlarge","c1.medium","c1.xlarge","cc1.4xlarge","cc2.8xlarge","cg1.4xlarge"],
 "ConstraintDescription" : "must be a valid EC2 instance type."
 },
 "WebServerPort" : {
 "Description" : "TCP/IP port of the balanced web server",
 "Type" : "String",
 "Default" : "80"
 },
 "AppCookieName" : {
 "Description" : "Name of the cookie for ELB AppCookieStickinessPolicy",
 "Type" : "String",
 "Default" : "Juanvi"
 },
 "KeyName" : {
 "Description" : "Name of an existing EC2 KeyPair to enable SSH access to the NAT instance",
 "Type" : "String"
 }
 },

#### Data used in the creation of the NAT instance. The arch (32 or 64 bits) depends on the instance size chosen
 "Mappings" : {
 "AWSInstanceType2Arch" : {
 "t1.micro" : { "Arch" : "32" },
 "m1.small" : { "Arch" : "64" },
 "m1.medium" : { "Arch" : "64" },
 "m1.large" : { "Arch" : "64" },
 "m1.xlarge" : { "Arch" : "64" },
 "m2.xlarge" : { "Arch" : "64" },
 "m2.2xlarge" : { "Arch" : "64" },
 "m2.4xlarge" : { "Arch" : "64" },
 "m3.xlarge" : { "Arch" : "64" },
 "m3.2xlarge" : { "Arch" : "64" },
 "c1.medium" : { "Arch" : "64" },
 "c1.xlarge" : { "Arch" : "64" }
 },
#### Data used in the creation of the NAT instance. The AMI ID depends on the region chosen for the instance

 "AWSRegionArch2AMI" : {
 "eu-west-1" : { "32" : "ami-1de2d969", "64" : "ami-1de2d969", "64HVM" : "NOT_YET_SUPPORTED" }
 }
 },
#### Here begins the definition of the resources that we want to create
 "Resources" : {
#### Creation of the public subnet
 "PublicSubnet" : {
 "Type" : "AWS::EC2::Subnet",
 "Properties" : {
 "VpcId" : "yourvpcid",
 "CidrBlock" : "172.20.1.32/27",
 "Tags" : [
 {"Key" : "Application", "Value" : { "Ref" : "AWS::StackId"} },
 {"Key" : "Network", "Value" : "Public" }
 ]
 }
 },
#### Association of the manually created route table with the public subnet
 "PublicSubnetRouteTableAssociation" : {
 "Type" : "AWS::EC2::SubnetRouteTableAssociation",
 "Properties" : {
 "SubnetId" : { "Ref" : "PublicSubnet" },
 "RouteTableId" : "rtb-xxxxx"
 }
 },
#### Association of the manually created ACL with the public subnet

 "PublicSubnetNetworkAclAssociation" : {
 "Type" : "AWS::EC2::SubnetNetworkAclAssociation",
 "Properties" : {
 "SubnetId" : { "Ref" : "PublicSubnet" },
 "NetworkAclId" : "acl-xxxxxx"
 }
 },
#### Creation of the private subnet
 "PrivateSubnet" : {
 "Type" : "AWS::EC2::Subnet",
 "Properties" : {
 "VpcId" : "vpc-a31de1c8",
 "CidrBlock" : "172.20.65.0/24",
 "Tags" : [
 {"Key" : "Application", "Value" : { "Ref" : "AWS::StackId"} },
 {"Key" : "Network", "Value" : "Private" }
 ]
 }
 },
#### Association of the manually created route table with the private subnet

 "PrivateSubnetRouteTableAssociation" : {
 "Type" : "AWS::EC2::SubnetRouteTableAssociation",
 "Properties" : {
 "SubnetId" : { "Ref" : "PrivateSubnet" },
 "RouteTableId" : "rtb-baa892d1"
 }
 },
#### Association of the manually created ACL with the private subnet

 "PrivateSubnetNetworkAclAssociation" : {
 "Type" : "AWS::EC2::SubnetNetworkAclAssociation",
 "Properties" : {
 "SubnetId" : { "Ref" : "PrivateSubnet" },
 "NetworkAclId" : "acl-xxxxxxx"
 }
 },
#### New route in the private route table that sends all public traffic to the internet through the newly created NAT instance
 "PrivateRoute" : {
 "Type" : "AWS::EC2::Route",
 "Properties" : {
 "RouteTableId" : "rtb-xxxxx",
 "DestinationCidrBlock" : "0.0.0.0/0",
 "InstanceId" : { "Ref" : "NATDevice" }
 }
 },
##### Creation of the ELB that balances traffic to the front web servers, placed in the public subnet created above
 "ElasticLoadBalancer" : {

 "Type" : "AWS::ElasticLoadBalancing::LoadBalancer",
 "Properties" : {
 "SecurityGroups" : [ { "Ref" : "LoadBalancerSecurityGroup" } ],
 "Subnets" : [ { "Ref" : "PublicSubnet" } ],
 "AppCookieStickinessPolicy" : [{
 "PolicyName" : "BooksLBPolicy",
 "CookieName" : { "Ref" : "AppCookieName" }
 } ],
 "Listeners" : [ {
 "LoadBalancerPort" : "80",
 "InstancePort" : { "Ref" : "WebServerPort" },
 "Protocol" : "HTTP",
 "PolicyNames" : [ "BooksLBPolicy" ]
 } ],
 "HealthCheck" : {
 "Target" : { "Fn::Join" : [ "", ["HTTP:", { "Ref" : "WebServerPort" }, "/"]]},
 "HealthyThreshold" : "3",
 "UnhealthyThreshold" : "5",
 "Interval" : "30",
 "Timeout" : "5"
 }
 }
 },
##### Creation of the internal ELB that balances traffic to the backend servers, placed in the private subnet created above

 "ElasticLoadBalancerBackend" : {

 "Type" : "AWS::ElasticLoadBalancing::LoadBalancer",
 "Properties" : {
 "SecurityGroups" : [ { "Ref" : "LoadBalancerSecurityGroup" } ],
 "Subnets" : [ { "Ref" : "PrivateSubnet" } ],
 "AppCookieStickinessPolicy" : [{
 "PolicyName" : "BooksLBPolicy",
 "CookieName" : { "Ref" : "AppCookieName" }
 } ],
 "Listeners" : [ {
 "LoadBalancerPort" : "80",
 "InstancePort" : { "Ref" : "WebServerPort" },
 "Protocol" : "HTTP",
 "PolicyNames" : [ "BooksLBPolicy" ]
 } ],
 "HealthCheck" : {
 "Target" : { "Fn::Join" : [ "", ["HTTP:", { "Ref" : "WebServerPort" }, "/"]]},
 "HealthyThreshold" : "3",
 "UnhealthyThreshold" : "5",
 "Interval" : "30",
 "Timeout" : "5"
 }
 }
 },
#### Elastic IP attached to the NAT instance
 "NATIPAddress" : {
 "Type" : "AWS::EC2::EIP",
 "Properties" : {
 "Domain" : "vpc",
 "InstanceId" : { "Ref" : "NATDevice" }
 }
 },
#### Creation of the NAT instance
 "NATDevice" : {
 "Type" : "AWS::EC2::Instance",
 "Properties" : {
 "InstanceType" : { "Ref" : "InstanceType" },
 "KeyName" : { "Ref" : "KeyName" },
 "SubnetId" : { "Ref" : "PublicSubnet" },
 "SourceDestCheck" : "false",
 "ImageId" : { "Fn::FindInMap" : [ "AWSRegionArch2AMI", { "Ref" : "AWS::Region" },
 { "Fn::FindInMap" : [ "AWSInstanceType2Arch", { "Ref" : "InstanceType" }, "Arch" ] } ] },
 "Tags" : [
 {"Key" : "Name", "Value" : "nat.public-staging-books" }
 ],
 "SecurityGroupIds" : [{ "Ref" : "NATSecurityGroup" }]
 }
 },
#### Creation of the security group for the NAT instance
 "NATSecurityGroup" : {
 "Type" : "AWS::EC2::SecurityGroup",
 "Properties" : {
 "GroupDescription" : "Enable internal access to the staging NAT device",
 "VpcId" : "vpc-a31de1c8",
 "SecurityGroupIngress" : [
 { "IpProtocol" : "tcp", "FromPort" : "80", "ToPort" : "80", "CidrIp" : "0.0.0.0/0"} ,
 { "IpProtocol" : "tcp", "FromPort" : "443", "ToPort" : "443", "CidrIp" : "0.0.0.0/0"} ,
 { "IpProtocol" : "tcp", "FromPort" : "22", "ToPort" : "22", "CidrIp" : "0.0.0.0/0"} ],
 "SecurityGroupEgress" : [
 { "IpProtocol" : "tcp", "FromPort" : "80", "ToPort" : "80", "CidrIp" : "0.0.0.0/0"} ,
 { "IpProtocol" : "tcp", "FromPort" : "443", "ToPort" : "443", "CidrIp" : "0.0.0.0/0"} ,
 { "IpProtocol" : "udp", "FromPort" : "53", "ToPort" : "53", "CidrIp" : "0.0.0.0/0"} ]
 }
 },
#### Creation of the security group for the ELBs
 "LoadBalancerSecurityGroup" : {
 "Type" : "AWS::EC2::SecurityGroup",
 "Properties" : {
 "GroupDescription" : "Enable HTTP access on port 80 and 443",
 "VpcId" : "vpc-a31de1c8",
 "SecurityGroupIngress" : [ { "IpProtocol" : "tcp", "FromPort" : "80", "ToPort" : "80", "CidrIp" : "0.0.0.0/0" } ],
 "SecurityGroupEgress" : [ { "IpProtocol" : "tcp", "FromPort" : "80", "ToPort" : "80", "CidrIp" : "0.0.0.0/0"} ],
 "SecurityGroupIngress" : [ { "IpProtocol" : "tcp", "FromPort" : "443", "ToPort" : "443", "CidrIp" : "0.0.0.0/0" } ],
 "SecurityGroupEgress" : [ { "IpProtocol" : "tcp", "FromPort" : "443", "ToPort" : "443", "CidrIp" : "0.0.0.0/0"} ]
 }
 }
 },
#### Output values, produced when the stack runs, that we want to save as stack information
 "Outputs" : {
 "URL" : {
 "Description" : "URL of the website",
 "Value" : { "Fn::Join" : [ "", [ "http://", { "Fn::GetAtt" : [ "ElasticLoadBalancer", "DNSName" ]}]]}
 }
 }
 }
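Since the whole template is plain JSON, it is easy to sanity-check it before uploading. As a sketch (not part of the original script), a few lines of Ruby can verify that every "Ref" points at a declared Parameter, a declared Resource, or one of the AWS pseudo parameters such as AWS::Region; the inline example template below is hypothetical:

```ruby
require 'json'

# Collect every name a "Ref" may legally point to: declared Parameters,
# declared Resources, and the common AWS pseudo parameters.
def known_names(template)
  template.fetch('Parameters', {}).keys +
    template.fetch('Resources', {}).keys +
    ['AWS::Region', 'AWS::StackId', 'AWS::StackName', 'AWS::AccountId']
end

# Walk the template recursively and return every Ref target
# that is not declared anywhere.
def dangling_refs(node, names, found = [])
  case node
  when Hash
    if node.keys == ['Ref'] && !names.include?(node['Ref'])
      found << node['Ref']
    else
      node.each_value { |v| dangling_refs(v, names, found) }
    end
  when Array
    node.each { |v| dangling_refs(v, names, found) }
  end
  found
end

# Tiny inline example instead of reading the real template from disk.
template = JSON.parse(<<~JSON)
  {
    "Parameters" : { "KeyName" : { "Type" : "String" } },
    "Resources"  : {
      "PublicSubnet" : {
        "Type" : "AWS::EC2::Subnet",
        "Properties" : { "Tags" : [ { "Key" : "App", "Value" : { "Ref" : "AWS::StackId" } } ] }
      },
      "Broken" : { "Type" : "AWS::EC2::Route", "Properties" : { "InstanceId" : { "Ref" : "NATDevice" } } }
    }
  }
JSON

puts dangling_refs(template, known_names(template)).inspect  # prints ["NATDevice"]
```

To check the real script, load it with template = JSON.parse(File.read('your-template.json')) instead of the inline sample.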

Highlights about Barcelona DevOps Days 2013

I attended the Barcelona DevOps Days on the 10th and 11th of October and I want to share some thoughts about the topics we discussed there.

My favourite presentation was “The DevOps Pay Raise: Quantifying Your Value to Move Up the Ladder” by Tom Levey, an amazing speaker.

In this presentation he explains how to demonstrate the benefits of DevOps culture inside the company in a way that business people understand (money). I noted some of the KPIs he showed us, which can be correlated to show how the bad performance of a site (in most cases a consequence of badly managed development, deployment, test, and release processes) can hurt your profits: in around 95% of cases, for example, bad response times caused by errors from a bad release lead to lower revenues from the site, which means lower profits.

Another useful and interesting presentation was “DevOps road at Tuenti” by Oscar San José and Victor García, about how they have developed and implemented tools that drive DevOps culture inside the company: making it easy for developers to deploy and test their code, providing information about all stages, and ensuring higher code quality by automating testing and other tedious, risky tasks such as code versioning, tagging, builds, and distribution to app markets. The tooling also helps Product Owners track the lifecycle of a feature, and everything can be tracked from the same place by everyone in the company.

For people who work in a non-IT company and want to introduce the DevOps culture, I strongly recommend the presentation “Even a classical retail company can go DevOps… With success” by Sylvain Loubradou, which explains the long and difficult path taken by a fairly typical IT department in a classical company towards the DevOps philosophy.

And finally I have to recommend “The Enemies of Continuous Delivery” by Isa Vilacides, who showed us the keys to achieving a successful continuous delivery process from her very useful QA point of view, which DevOps culture needs.

I also want to highlight something that was very interesting but badly organised: the open spaces, where we discussed hot topics such as configuration management and SQL updates, which in some cases are very difficult to automate in a reliable way.

As for the ignite talks, they are a real challenge for speakers because they only had 5 minutes to explain 20 slides, at 15 seconds per slide. Take a look at “Poka-yoke and DevOps” by Ulf Månsson.

I can tell you that this kind of event is a must for exchanging knowledge and discovering things that can greatly improve our day-to-day work and the performance of our business.

Finally I want to thank Rhommel Lamas for all his effort, and all of the sponsors (http://www.strsistemas.com, http://www.moritz.com, http://www.rakuten.co.jp, http://www.ca.com/es/lpg/appvelocity/home.aspx, http://www.tid.es, http://www.3scale.net, http://www.github.com) for making this event possible in Spain for the first time.

See you in the next DevOps days!

Immutable servers with Vagrant and Chef – Chapter I

1.     Requirements

Ruby

https://www.ruby-lang.org/en/

VirtualBox

https://www.virtualbox.org/

EC2 tools (Optional)

http://juanvicenteherrera.eu/2012/02/21/first-steps-with-ec2-api-tools/

Knife

https://learnchef.opscode.com/quickstart/workstation-setup/

Vagrant

$ gem install vagrant

Vagrant plugins

https://github.com/mitchellh/vagrant-aws

https://github.com/schisamo/vagrant-omnibus

https://github.com/cassianoleal/vagrant-butcher

$ vagrant plugin install vagrant-aws
$ vagrant plugin install vagrant-omnibus
$ vagrant plugin install vagrant-butcher

2.     Tests

Vagrant first steps in workstation with VirtualBox

$ vagrant box add base http://files.vagrantup.com/lucid32.box
$ vagrant init
$ vagrant up

Vagrant test with manual Chef bootstrap

$ vagrant box add CentOS-6.4 http://developer.nrel.gov/downloads/vagrant-boxes/CentOS-6.4-x86_64-v20130309.box
$ vagrant init opscode-ubuntu-1204 https://opscode-vm-bento.s3.amazonaws.com/vagrant/opscode_ubuntu-12.04-i386_chef-11.4.4.box --no-color
$ vagrant up --no-color
$ knife bootstrap localhost --ssh-user ec2-user --ssh-password vagrant --ssh-port 2222 --run-list "recipe[apache2]" --sudo
$ vagrant ssh

Vagrant EC2 test 2, without Chef management

$ vagrant plugin install vagrant-aws
$ vagrant box add dummy https://github.com/mitchellh/vagrant-aws/raw/master/dummy.box
$ vagrant up --provider=aws

3.    POC Vagrant/EC2/Chef

Briefing

Base Box: a base box is simply a saved hard disk of a virtual machine created with VirtualBox. It can contain anything, but it needs at least:

  • Ruby
  • VirtualBox guest additions
  • Puppet or Chef

When using the AWS provider, the box format is basically just the required metadata.json file along with a Vagrantfile that sets the default provider-specific configuration for this provider.

Box used in the example

Vagrant AWS Example Box

Vagrant providers each require a custom provider-specific box format. These files compose the contents of a box for the aws provider.

  • README.md
  • Vagrantfile
  • metadata.json

To turn this into a box:

$ tar cvzf aws.box ./metadata.json ./Vagrantfile
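Of those three files, metadata.json is the piece Vagrant strictly requires: it just names the provider the box targets. For the aws dummy box it is essentially this one-liner (shown here as a sketch of the format):

```json
{
  "provider": "aws"
}
```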

This box works by using Vagrant’s built-in Vagrantfile merging to setup defaults for AWS. These defaults can easily be overwritten by higher-level Vagrantfiles (such as project root Vagrantfiles).

Vagrant basic commands

vagrant init :

creates a file called ‘Vagrantfile’ in your current directory

when you look at the file, it will contain the directive …

config.vm.box = "base"

this is what makes the link to the box we called ‘base’

you can further edit the Vagrantfile before you start it

vagrant up:

up until now, no virtual machine was created

therefore Vagrant will import the disks from the box ‘base’ into VirtualBox (this does not apply to AWS, where the instance to be created already has its EBS disks configured)

map port 22 of your VM via NAT to a free local port

it will create a .vagrant file : a file that contains a mapping between your description ‘base’ and the UUID of the virtual machine

vagrant ssh:

this looks up the SSH port mapping and executes an SSH process to log into the machine, using a private key to log in as the vagrant user to a box that has the vagrant user’s public key set up in the virtual machine

Vagrant commands

$ vagrant
Tasks:
vagrant box                        # Commands to manage system boxes
vagrant destroy                    # Destroy the environment, deleting the created virtual machines
vagrant halt                       # Halt the running VMs in the environment
vagrant help [TASK]                # Describe available tasks or one specific task
vagrant init [box_name] [box_url]  # Initializes the current folder for Vagrant usage
vagrant package                    # Package a Vagrant environment for distribution
vagrant provision                  # Rerun the provisioning scripts on a running VM
vagrant reload                     # Reload the environment, halting it then restarting it.
vagrant resume                     # Resume a suspended Vagrant environment.
vagrant ssh                        # SSH into the currently running Vagrant environment.
vagrant ssh_config                 # outputs .ssh/config valid syntax for connecting to this environment via ssh
vagrant status                     # Shows the status of the current Vagrant environment.
vagrant suspend                    # Suspend a running Vagrant environment.
vagrant up                         # Creates the Vagrant environment
vagrant version                    # Prints the Vagrant version information

POC

Add dummy AWS box

$ vagrant box add dummy https://github.com/mitchellh/vagrant-aws/raw/master/dummy.box

Vagrant example with 1 ec2 instance and configuration managed by Chef

Vagrantfile


Vagrant.configure("2") do |config|
  config.vm.box = "dummy"
  config.omnibus.chef_version = "11.6.0"

  config.vm.provider :aws do |aws, override|
    aws.keypair_name      = "bq"
    aws.access_key_id     = " xxxxxx "
    aws.secret_access_key = " xxxxxxxxxxxxxxxxxx "
    aws.ami               = "ami-3ad1af53"
    override.ssh.username = "ec2-user"
    override.ssh.private_key_path = "/Users/juanvi/keypairs/bq.pem"
  end

  config.vm.provision :chef_client do |chef|
    chef.chef_server_url        = "https://api.opscode.com/organizations/juanvi"
    chef.validation_key_path    = "/Users/juanvi/chef-repo/.chef/juanvi-validator-new.pem"
    chef.validation_client_name = "juanvi-validator"

    # Provision with the webserver role
    chef.add_role("webserver")

    # Set the environment for the chef server
    chef.environment = "dev"
  end
end

Note: the AMI used as a template is a standard 64-bit EBS-backed AMI, http://aws.amazon.com/amazon-linux-ami/, with sudo permissions for the vagrant user.


Vagrant example with 2 instances in ec2 and configuration managed by Chef


Vagrant.configure("2") do |config|
  config.omnibus.chef_version = "11.6.0"

  config.vm.define :web do |web|
    web.vm.box = "dummy"

    web.vm.provider :aws do |aws, override|
      aws.keypair_name      = "bq"
      aws.access_key_id     = " xxxxxx "
      aws.secret_access_key = " xxxxxxxxxxxxxxxxxx "
      aws.ami               = "ami-3ad1af53"
      override.ssh.username = "ec2-user"
      override.ssh.private_key_path = "/Users/juanvi/keypairs/bq.pem"
    end

    web.vm.provision :chef_client do |chef|
      chef.chef_server_url        = "https://api.opscode.com/organizations/juanvi"
      chef.validation_key_path    = "/Users/juanvi/chef-repo/.chef/juanvi-validator-new.pem"
      chef.validation_client_name = "juanvi-validator"

      # Provision with the webserver role
      chef.add_role("webserver")

      # Set the environment for the chef server
      chef.environment = "dev"
    end
  end

  config.vm.define :db do |db|
    db.vm.box = "dummy"

    db.vm.provider :aws do |aws, override|
      aws.keypair_name      = "bq"
      aws.access_key_id     = " xxxxxx "
      aws.secret_access_key = " xxxxxx "
      aws.ami               = "ami-3ad1af53"
      override.ssh.username = "ec2-user"
      override.ssh.private_key_path = "/Users/juanvi/keypairs/bq.pem"
    end

    db.vm.provision :chef_client do |chef|
      chef.chef_server_url        = "https://api.opscode.com/organizations/juanvi"
      chef.validation_key_path    = "/Users/juanvi/chef-repo/.chef/juanvi-validator-new.pem"
      chef.validation_client_name = "juanvi-validator"

      # Provision with the db role
      chef.add_role("db")

      # Set the environment for the chef server
      chef.environment = "dev"
    end
  end
end

Provisioning only one kind of server executing chef_client

$ vagrant provision web --provision-with chef_client

Provisioning the whole platform executing chef_client

$ vagrant provision --provision-with chef_client

After editing the Vagrantfile you need to ‘reboot’ the machine for these settings to take effect

$ vagrant reload

Version control

Now is a good time to put your Vagrant project under version control:

$ cd
$ git init
$ git add Vagrantfile
$ git commit -m "First version of Vagrantfile"

Cleanup

Install the following plugin for vagrant:

$ vagrant plugin install vagrant-butcher

Add this line to your Vagrantfile:


config.butcher.knife_config_file = '.chef/knife.rb'

Then you can terminate an instance and deregister it from the Chef server by executing:

$ vagrant destroy -f
[Butcher] knife.rb location set to '/path/to/knife.rb'
[Butcher] Chef node 'node_name' successfully butchered from the server...
[Butcher] Chef client 'node_name' successfully butchered from the server...
[default] Forcing shutdown of VM...
[default] Destroying VM and associated drives...

4.    Tips

  • Enabling different settings based on environment

Setting these variables is as easy as prepending them to the vagrant command:

$ vagrant_env=development vagrant up

  • Create a config file in Ruby with the AWS credentials and include it in all of your Vagrantfiles, so you have only one configuration file:

require File.expand_path('~/.chef/aws_credentials.rb')

Inside the Vagrantfile you can then use these variables with:

access_key_id = @access_key_id

  • I recommend keeping all of the Chef certificates in your ~/.chef folder and having a knife.rb file per project with its specific configuration (paths to the Chef certificates and EC2 keypairs in your ~/.chef folder)
  • If you include the .vagrant folder in your git repository, the whole team can manage the Vagrant environment
  • Create a base Chef role with all of the common tools needed on all of the servers (basic tools):
{
  "name": "web-base",
  "description": "Base role applied to all nodes.",
  "run_list": [
    "recipe[git]",
    "recipe[build-essential]",
    "recipe[vim]",
    "recipe[aws_hostname]",
    "recipe[aws_hostname::register-dns]"
  ],
  "override_attributes": {},
  "chef_type": "role"
}
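The environment-variable tip above can be sketched in a Vagrantfile like this; the environment names and AMI IDs below are hypothetical placeholders, not values from a real account:

```ruby
# Pick per-environment settings from the vagrant_env variable,
# falling back to "development" when it is not set.
AMI_BY_ENV = {
  'development' => 'ami-3ad1af53',   # hypothetical AMI IDs
  'staging'     => 'ami-00000000'
}

env = ENV.fetch('vagrant_env', 'development')
ami = AMI_BY_ENV.fetch(env, AMI_BY_ENV['development'])

puts "Provisioning the #{env} environment with AMI #{ami}"
# Inside the provider block you would then set:
#   aws.ami = ami
```

With this in place, `vagrant_env=staging vagrant up` selects the staging settings, and an unset variable falls back to development.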

5.     Some observations

  • It clearly helps everybody to have a consistent environment to develop against, the latest version is just one git pull away.
  • The central approach drives people to a) do frequent commits and b) do stable commits.
  • The task of writing recipes is not picked up by all team members, and seems to remain the main job of the system-oriented people on the team.
  • Reading cookbooks helps people understand what is needed and makes it easy to point out what needs to be changed. But learning the skills to write recipes/manifests is a blocking factor, just as it would be to have a backend developer writing frontend code.
  • The test the admins have to do before committing their cookbooks is to destroy their own ‘development’ box and re-provision a new one to see if it works.
  • The longer provisioning takes, the less frequently people do it. It’s important to keep that process as fast as possible. It’s all about feedback, and we want it fast.
  • Installation problems would get noticed far sooner in the process.

People would only do a full rebuild in the morning when getting their coffee.

6.     Conclusions

  • Vagrant is an essential part of the DevOps process: it is the solution for developing, testing, and deploying in the same environment. It thus ensures a smoother transition of your project from the dev team to the ops team.
  • Vagrant is EASY. And it is compatible with Chef.

7.     References and resources

You can check the cookbook used in these examples and all of the Chef resources (https://github.com/juanviz/chef-jv/blob/master/cookbooks/wiki_app/README.md and https://github.com/juanviz/chef-jv/) as well as more Vagrant examples (https://github.com/juanviz/VagrantProjects).

http://docs.vagrantup.com/v2/cli/index.html

http://red-badger.com/blog/2013/02/21/automating-your-infrastructure-with-vagrant-chef-from-development-to-the-cloud/

http://docs-v1.vagrantup.com/v1/docs/provisioners/chef_server.html

http://www.jasongrimes.org/2012/06/managing-lamp-environments-with-chef-vagrant-and-ec2-1-of-3/

http://docs-v1.vagrantup.com/v1/docs/getting-started/provisioning.html

http://beacon.wharton.upenn.edu/404/2013/04/starting-the-server-build/

DevOps resources

I want to share with you some interesting and useful resources for getting introduced to and learning about DevOps best practices. If you know of more resources that could be useful, please share them with all of us!

Podcasts

The Food Fight Show ->  http://foodfightshow.org/

DevOps Cafe -> http://devopscafe.org/

Blogs and Groups

http://dev2ops.org

http://devops.com/

http://devopsangle.com/

http://www.opscode.com/blog/

https://puppetlabs.com/blog/

https://sites.google.com/site/madridevops/

https://plus.google.com/u/0/communities/115849774871368603888

http://www.appdynamics.com/blog/

http://blog.devopsguys.com

http://www.devopsdays.org/